00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4081
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3671
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.148 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.149 The recommended git tool is: git
00:00:00.150 using credential 00000000-0000-0000-0000-000000000002
00:00:00.152 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.229 Fetching changes from the remote Git repository
00:00:00.232 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.302 Using shallow fetch with depth 1
00:00:00.302 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.302 > git --version # timeout=10
00:00:00.366 > git --version # 'git version 2.39.2'
00:00:00.366 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.398 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.398 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.025 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.038 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.052 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.052 > git config core.sparsecheckout # timeout=10
00:00:07.066 > git read-tree -mu HEAD # timeout=10
00:00:07.083 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.114 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.114 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.214 [Pipeline] Start of Pipeline
00:00:07.229 [Pipeline] library
00:00:07.231 Loading library shm_lib@master
00:00:07.231 Library shm_lib@master is cached. Copying from home.
00:00:07.247 [Pipeline] node
00:00:22.251 Still waiting to schedule task
00:00:22.252 Waiting for next available executor on ‘vagrant-vm-host’
00:04:34.998 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:04:35.000 [Pipeline] {
00:04:35.012 [Pipeline] catchError
00:04:35.014 [Pipeline] {
00:04:35.029 [Pipeline] wrap
00:04:35.039 [Pipeline] {
00:04:35.049 [Pipeline] stage
00:04:35.051 [Pipeline] { (Prologue)
00:04:35.069 [Pipeline] echo
00:04:35.070 Node: VM-host-SM38
00:04:35.077 [Pipeline] cleanWs
00:04:35.132 [WS-CLEANUP] Deleting project workspace...
00:04:35.132 [WS-CLEANUP] Deferred wipeout is used...
00:04:35.140 [WS-CLEANUP] done
00:04:35.348 [Pipeline] setCustomBuildProperty
00:04:35.431 [Pipeline] httpRequest
00:04:35.761 [Pipeline] echo
00:04:35.763 Sorcerer 10.211.164.20 is alive
00:04:35.775 [Pipeline] retry
00:04:35.778 [Pipeline] {
00:04:35.793 [Pipeline] httpRequest
00:04:35.799 HttpMethod: GET
00:04:35.799 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:04:35.800 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:04:35.801 Response Code: HTTP/1.1 200 OK
00:04:35.802 Success: Status code 200 is in the accepted range: 200,404
00:04:35.803 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:04:36.103 [Pipeline] }
00:04:36.121 [Pipeline] // retry
00:04:36.131 [Pipeline] sh
00:04:36.431 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:04:36.452 [Pipeline] httpRequest
00:04:36.780 [Pipeline] echo
00:04:36.782 Sorcerer 10.211.164.20 is alive
00:04:36.793 [Pipeline] retry
00:04:36.795 [Pipeline] {
00:04:36.811 [Pipeline] httpRequest
00:04:36.817 HttpMethod: GET
00:04:36.817 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:04:36.818 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:04:36.819 Response Code: HTTP/1.1 200 OK
00:04:36.819 Success: Status code 200 is in the accepted range: 200,404
00:04:36.820 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:04:39.061 [Pipeline] }
00:04:39.083 [Pipeline] // retry
00:04:39.091 [Pipeline] sh
00:04:39.408 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:04:42.715 [Pipeline] sh
00:04:43.000 + git -C spdk log --oneline -n5
00:04:43.001 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:04:43.001 5592070b3 doc: update nvmf_tracing.md
00:04:43.001 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:04:43.001 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:04:43.001 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:04:43.022 [Pipeline] withCredentials
00:04:43.034 > git --version # timeout=10
00:04:43.048 > git --version # 'git version 2.39.2'
00:04:43.066 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:04:43.069 [Pipeline] {
00:04:43.091 [Pipeline] retry
00:04:43.096 [Pipeline] {
00:04:43.130 [Pipeline] sh
00:04:43.410 + git ls-remote http://dpdk.org/git/dpdk main
00:04:43.425 [Pipeline] }
00:04:43.444 [Pipeline] // retry
00:04:43.450 [Pipeline] }
00:04:43.468 [Pipeline] // withCredentials
00:04:43.481 [Pipeline] httpRequest
00:04:43.793 [Pipeline] echo
00:04:43.795 Sorcerer 10.211.164.20 is alive
00:04:43.804 [Pipeline] retry
00:04:43.807 [Pipeline] {
00:04:43.821 [Pipeline] httpRequest
00:04:43.826 HttpMethod: GET
00:04:43.827 URL: http://10.211.164.20/packages/dpdk_a4f455560f747af8cd7fa99db86e757b7ff2fe79.tar.gz
00:04:43.827 Sending request to url: http://10.211.164.20/packages/dpdk_a4f455560f747af8cd7fa99db86e757b7ff2fe79.tar.gz
00:04:43.829 Response Code: HTTP/1.1 200 OK
00:04:43.830 Success: Status code 200 is in the accepted range: 200,404
00:04:43.830 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_a4f455560f747af8cd7fa99db86e757b7ff2fe79.tar.gz
00:04:45.055 [Pipeline] }
00:04:45.076 [Pipeline] // retry
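The three package fetches above (jbp, spdk, dpdk) all follow the same retry / download / extract pattern against the "Sorcerer" cache. A minimal bash sketch of that pattern, for reference only: the host, workspace path, package names, and tar flags are taken from the log, while the curl call and the retry count of 3 are assumptions, since the log only shows Jenkins' httpRequest step.

  #!/usr/bin/env bash
  # Sketch of the fetch-and-extract pattern above; not the actual Jenkins step.
  set -euo pipefail
  sorcerer=http://10.211.164.20
  ws=/var/jenkins/workspace/nvmf-tcp-vg-autotest
  fetch() {                      # usage: fetch <package-basename>
      local pkg=$1
      for attempt in 1 2 3; do   # retry count is an assumption
          if curl -fsS -o "$ws/$pkg.tar.gz" "$sorcerer/packages/$pkg.tar.gz"; then
              # --no-same-owner matches the tar invocations in the log
              tar --no-same-owner -xf "$ws/$pkg.tar.gz" -C "$ws"
              return 0
          fi
          echo "fetch of $pkg failed (attempt $attempt), retrying" >&2
      done
      return 1
  }
  fetch jbp_db4637e8b949f278f369ec13f70585206ccd9507
  fetch spdk_2f2acf4eb25cee406c156120cee22721275ca7fd
  fetch dpdk_a4f455560f747af8cd7fa99db86e757b7ff2fe79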
00:04:45.087 [Pipeline] sh
00:04:45.366 + tar --no-same-owner -xf dpdk_a4f455560f747af8cd7fa99db86e757b7ff2fe79.tar.gz
00:04:47.277 [Pipeline] sh
00:04:47.557 + git -C dpdk log --oneline -n5
00:04:47.557 a4f455560f version: 24.11-rc4
00:04:47.557 0c81db5870 dts: remove leftover node methods
00:04:47.558 71eae7fe3e doc: correct definition of stats per queue feature
00:04:47.558 f2b1510f19 net/octeon_ep: replace use of word segregate
00:04:47.558 f81cdd729b test/hash: replace use of term segregate
00:04:47.576 [Pipeline] writeFile
00:04:47.593 [Pipeline] sh
00:04:47.881 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:47.894 [Pipeline] sh
00:04:48.175 + cat autorun-spdk.conf
00:04:48.175 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:48.175 SPDK_TEST_NVMF=1
00:04:48.175 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:48.175 SPDK_TEST_USDT=1
00:04:48.175 SPDK_RUN_UBSAN=1
00:04:48.175 SPDK_TEST_NVMF_MDNS=1
00:04:48.175 NET_TYPE=virt
00:04:48.175 SPDK_JSONRPC_GO_CLIENT=1
00:04:48.175 SPDK_TEST_NATIVE_DPDK=main
00:04:48.175 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:04:48.175 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:48.183 RUN_NIGHTLY=1
00:04:48.185 [Pipeline] }
00:04:48.199 [Pipeline] // stage
00:04:48.213 [Pipeline] stage
00:04:48.215 [Pipeline] { (Run VM)
00:04:48.228 [Pipeline] sh
00:04:48.517 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:48.517 + echo 'Start stage prepare_nvme.sh'
00:04:48.517 Start stage prepare_nvme.sh
00:04:48.517 + [[ -n 1 ]]
00:04:48.517 + disk_prefix=ex1
00:04:48.517 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:04:48.518 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:04:48.518 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:04:48.518 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:48.518 ++ SPDK_TEST_NVMF=1
00:04:48.518 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:48.518 ++ SPDK_TEST_USDT=1
00:04:48.518 ++ SPDK_RUN_UBSAN=1
00:04:48.518 ++ SPDK_TEST_NVMF_MDNS=1
00:04:48.518 ++ NET_TYPE=virt
00:04:48.518 ++ SPDK_JSONRPC_GO_CLIENT=1
00:04:48.518 ++ SPDK_TEST_NATIVE_DPDK=main
00:04:48.518 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:04:48.518 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:48.518 ++ RUN_NIGHTLY=1
00:04:48.518 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:04:48.518 + nvme_files=()
00:04:48.518 + declare -A nvme_files
00:04:48.518 + backend_dir=/var/lib/libvirt/images/backends
00:04:48.518 + nvme_files['nvme.img']=5G
00:04:48.518 + nvme_files['nvme-cmb.img']=5G
00:04:48.518 + nvme_files['nvme-multi0.img']=4G
00:04:48.518 + nvme_files['nvme-multi1.img']=4G
00:04:48.518 + nvme_files['nvme-multi2.img']=4G
00:04:48.518 + nvme_files['nvme-openstack.img']=8G
00:04:48.518 + nvme_files['nvme-zns.img']=5G
00:04:48.518 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:48.518 + (( SPDK_TEST_FTL == 1 ))
00:04:48.518 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:48.518 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:48.518 + for nvme in "${!nvme_files[@]}"
00:04:48.518 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:04:48.518 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:48.518 + for nvme in "${!nvme_files[@]}"
00:04:48.518 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:04:48.518 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:48.518 + for nvme in "${!nvme_files[@]}"
00:04:48.518 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:04:48.518 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:48.518 + for nvme in "${!nvme_files[@]}"
00:04:48.518 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:04:48.518 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:48.518 + for nvme in "${!nvme_files[@]}"
00:04:48.518 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:04:48.518 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:48.518 + for nvme in "${!nvme_files[@]}"
00:04:48.518 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:04:48.778 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:48.778 + for nvme in "${!nvme_files[@]}"
00:04:48.778 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:04:48.778 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:48.778 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:04:48.778 + echo 'End stage prepare_nvme.sh'
00:04:48.778 End stage prepare_nvme.sh
00:04:48.792 [Pipeline] sh
00:04:49.078 + DISTRO=fedora39
00:04:49.078 + CPUS=10
00:04:49.078 + RAM=12288
00:04:49.078 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:49.078 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:04:49.078
00:04:49.078 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:04:49.078 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:04:49.078 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:04:49.078 HELP=0
00:04:49.078 DRY_RUN=0
00:04:49.078 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:04:49.078 NVME_DISKS_TYPE=nvme,nvme,
00:04:49.078 NVME_AUTO_CREATE=0
00:04:49.078 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:04:49.078 NVME_CMB=,,
00:04:49.078 NVME_PMR=,,
00:04:49.078 NVME_ZNS=,,
00:04:49.078 NVME_MS=,,
00:04:49.078 NVME_FDP=,,
00:04:49.078 SPDK_VAGRANT_DISTRO=fedora39
00:04:49.078 SPDK_VAGRANT_VMCPU=10
00:04:49.078 SPDK_VAGRANT_VMRAM=12288
00:04:49.078 SPDK_VAGRANT_PROVIDER=libvirt
00:04:49.078 SPDK_VAGRANT_HTTP_PROXY=
00:04:49.078 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:49.078 SPDK_OPENSTACK_NETWORK=0
00:04:49.078 VAGRANT_PACKAGE_BOX=0
00:04:49.078 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:04:49.078 FORCE_DISTRO=true
00:04:49.078 VAGRANT_BOX_VERSION=
00:04:49.078 EXTRA_VAGRANTFILES=
00:04:49.078 NIC_MODEL=e1000
00:04:49.078
00:04:49.078 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:04:49.078 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:04:51.653 Bringing machine 'default' up with 'libvirt' provider...
00:04:51.914 ==> default: Creating image (snapshot of base box volume).
00:04:51.914 ==> default: Creating domain with the following settings...
00:04:51.914 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732687631_142f5abbe5839617e070
00:04:51.914 ==> default: -- Domain type: kvm
00:04:51.914 ==> default: -- Cpus: 10
00:04:51.914 ==> default: -- Feature: acpi
00:04:51.914 ==> default: -- Feature: apic
00:04:51.914 ==> default: -- Feature: pae
00:04:51.914 ==> default: -- Memory: 12288M
00:04:51.914 ==> default: -- Memory Backing: hugepages:
00:04:51.914 ==> default: -- Management MAC:
00:04:51.914 ==> default: -- Loader:
00:04:51.914 ==> default: -- Nvram:
00:04:51.914 ==> default: -- Base box: spdk/fedora39
00:04:51.914 ==> default: -- Storage pool: default
00:04:51.914 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732687631_142f5abbe5839617e070.img (20G)
00:04:51.914 ==> default: -- Volume Cache: default
00:04:51.914 ==> default: -- Kernel:
00:04:51.914 ==> default: -- Initrd:
00:04:51.914 ==> default: -- Graphics Type: vnc
00:04:51.914 ==> default: -- Graphics Port: -1
00:04:51.914 ==> default: -- Graphics IP: 127.0.0.1
00:04:51.914 ==> default: -- Graphics Password: Not defined
00:04:51.914 ==> default: -- Video Type: cirrus
00:04:51.914 ==> default: -- Video VRAM: 9216
00:04:51.914 ==> default: -- Sound Type:
00:04:51.914 ==> default: -- Keymap: en-us
00:04:51.914 ==> default: -- TPM Path:
00:04:51.914 ==> default: -- INPUT: type=mouse, bus=ps2
00:04:51.914 ==> default: -- Command line args:
00:04:51.914 ==> default: -> value=-device,
00:04:51.914 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:04:51.914 ==> default: -> value=-drive,
00:04:51.914 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:04:51.914 ==> default: -> value=-device,
00:04:51.914 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:51.914 ==> default: -> value=-device,
00:04:51.914 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:04:51.914 ==> default: -> value=-drive,
00:04:51.914 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:04:51.914 ==> default: -> value=-device,
00:04:51.914 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:51.914 ==> default: -> value=-drive,
00:04:51.914 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:04:51.914 ==> default: -> value=-device,
00:04:51.914 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:51.914 ==> default: -> value=-drive,
00:04:51.914 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:04:51.914 ==> default: -> value=-device,
00:04:51.914 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:51.914 ==> default: Creating shared folders metadata...
00:04:51.914 ==> default: Starting domain.
00:04:53.299 ==> default: Waiting for domain to get an IP address...
00:05:08.206 ==> default: Waiting for SSH to become available...
00:05:08.206 ==> default: Configuring and enabling network interfaces...
00:05:10.773 default: SSH address: 192.168.121.210:22
00:05:10.773 default: SSH username: vagrant
00:05:10.773 default: SSH auth method: private key
00:05:12.676 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:19.231 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:05:23.413 ==> default: Mounting SSHFS shared folder...
00:05:24.790 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:24.790 ==> default: Checking Mount..
00:05:25.762 ==> default: Folder Successfully Mounted!
00:05:25.762
00:05:25.762 SUCCESS!
00:05:25.762
00:05:25.762 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:05:25.762 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:25.762 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:05:25.762
00:05:25.775 [Pipeline] }
00:05:25.788 [Pipeline] // stage
00:05:25.797 [Pipeline] dir
00:05:25.798 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:05:25.799 [Pipeline] {
00:05:25.813 [Pipeline] catchError
00:05:25.815 [Pipeline] {
00:05:25.826 [Pipeline] sh
00:05:26.098 + vagrant ssh-config --host vagrant
00:05:26.098 + sed -ne '/^Host/,$p'
00:05:26.098 + tee ssh_conf
00:05:28.623 Host vagrant
00:05:28.623 HostName 192.168.121.210
00:05:28.623 User vagrant
00:05:28.623 Port 22
00:05:28.623 UserKnownHostsFile /dev/null
00:05:28.623 StrictHostKeyChecking no
00:05:28.623 PasswordAuthentication no
00:05:28.623 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:28.623 IdentitiesOnly yes
00:05:28.623 LogLevel FATAL
00:05:28.623 ForwardAgent yes
00:05:28.623 ForwardX11 yes
00:05:28.623
00:05:28.638 [Pipeline] withEnv
00:05:28.640 [Pipeline] {
00:05:28.656 [Pipeline] sh
00:05:28.932 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:05:28.932 source /etc/os-release
00:05:28.932 [[ -e /image.version ]] && img=$(< /image.version)
00:05:28.932 # Minimal, systemd-like check.
00:05:28.932 if [[ -e /.dockerenv ]]; then
00:05:28.932 # Clear garbage from the node'\''s name:
00:05:28.932 # agt-er_autotest_547-896 -> autotest_547-896
00:05:28.932 # $HOSTNAME is the actual container id
00:05:28.932 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:28.932 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:28.932 # We can assume this is a mount from a host where container is running,
00:05:28.932 # so fetch its hostname to easily identify the target swarm worker.
00:05:28.932 container="$(< /etc/hostname) ($agent)"
00:05:28.932 else
00:05:28.932 # Fallback
00:05:28.932 container=$agent
00:05:28.932 fi
00:05:28.932 fi
00:05:28.932 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:28.932 '
00:05:28.942 [Pipeline] }
00:05:28.961 [Pipeline] // withEnv
00:05:28.969 [Pipeline] setCustomBuildProperty
00:05:28.984 [Pipeline] stage
00:05:28.986 [Pipeline] { (Tests)
00:05:29.003 [Pipeline] sh
00:05:29.324 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:29.338 [Pipeline] sh
00:05:29.617 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:29.633 [Pipeline] timeout
00:05:29.633 Timeout set to expire in 1 hr 0 min
00:05:29.636 [Pipeline] {
00:05:29.651 [Pipeline] sh
00:05:29.929 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:05:30.502 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:05:30.517 [Pipeline] sh
00:05:30.795 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:05:30.810 [Pipeline] sh
00:05:31.089 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:31.105 [Pipeline] sh
00:05:31.384 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo'
00:05:31.384 ++ readlink -f spdk_repo
00:05:31.384 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:31.384 + [[ -n /home/vagrant/spdk_repo ]]
00:05:31.384 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:31.384 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:31.384 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:31.384 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:31.384 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:31.384 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:05:31.384 + cd /home/vagrant/spdk_repo
00:05:31.384 + source /etc/os-release
00:05:31.384 ++ NAME='Fedora Linux'
00:05:31.384 ++ VERSION='39 (Cloud Edition)'
00:05:31.384 ++ ID=fedora
00:05:31.384 ++ VERSION_ID=39
00:05:31.384 ++ VERSION_CODENAME=
00:05:31.384 ++ PLATFORM_ID=platform:f39
00:05:31.384 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:31.384 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:31.384 ++ LOGO=fedora-logo-icon
00:05:31.384 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:31.384 ++ HOME_URL=https://fedoraproject.org/
00:05:31.384 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:31.384 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:31.384 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:31.384 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:31.384 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:31.384 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:31.384 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:31.384 ++ SUPPORT_END=2024-11-12
00:05:31.384 ++ VARIANT='Cloud Edition'
00:05:31.384 ++ VARIANT_ID=cloud
00:05:31.384 + uname -a
00:05:31.384 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:31.384 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:31.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:31.948 Hugepages
00:05:31.948 node hugesize free / total
00:05:31.948 node0 1048576kB 0 / 0
00:05:31.948 node0 2048kB 0 / 0
00:05:31.948
00:05:31.948 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:31.948 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:31.948 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:31.948 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:05:31.948 + rm -f /tmp/spdk-ld-path
00:05:31.948 + source autorun-spdk.conf
00:05:31.948 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:31.948 ++ SPDK_TEST_NVMF=1
00:05:31.948 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:31.948 ++ SPDK_TEST_USDT=1
00:05:31.948 ++ SPDK_RUN_UBSAN=1
00:05:31.948 ++ SPDK_TEST_NVMF_MDNS=1
00:05:31.948 ++ NET_TYPE=virt
00:05:31.948 ++ SPDK_JSONRPC_GO_CLIENT=1
00:05:31.948 ++ SPDK_TEST_NATIVE_DPDK=main
00:05:31.948 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:05:31.948 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:31.948 ++ RUN_NIGHTLY=1
00:05:31.948 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:31.948 + [[ -n '' ]]
00:05:31.948 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:32.207 + for M in /var/spdk/build-*-manifest.txt
00:05:32.207 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:32.207 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:32.207 + for M in /var/spdk/build-*-manifest.txt
00:05:32.207 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:32.207 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:32.207 + for M in /var/spdk/build-*-manifest.txt
00:05:32.207 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:32.207 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:32.207 ++ uname
00:05:32.207 + [[ Linux == \L\i\n\u\x ]]
00:05:32.207 + sudo dmesg -T
00:05:32.207 + sudo dmesg --clear
00:05:32.207 + dmesg_pid=5748
00:05:32.207 + [[ Fedora Linux == FreeBSD ]]
00:05:32.207 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:32.207 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:32.207 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:32.207 + [[ -x /usr/src/fio-static/fio ]]
00:05:32.207 + sudo dmesg -Tw
00:05:32.207 + export FIO_BIN=/usr/src/fio-static/fio
00:05:32.207 + FIO_BIN=/usr/src/fio-static/fio
00:05:32.207 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:32.207 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:32.207 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:32.207 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:32.207 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:32.207 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:32.207 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:32.207 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:32.207 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:32.207 06:07:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:05:32.207 06:07:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_NVMF_MDNS=1
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=main
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:32.207 06:07:51 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:05:32.207 06:07:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:32.207 06:07:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:32.207 06:07:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:05:32.207 06:07:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:32.207 06:07:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:32.207 06:07:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:32.207 06:07:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:32.207 06:07:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:32.207 06:07:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.207 06:07:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.207 06:07:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.207 06:07:51 -- paths/export.sh@5 -- $ export PATH
00:05:32.207 06:07:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.207 06:07:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:32.207 06:07:51 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:32.207 06:07:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732687671.XXXXXX
00:05:32.207 06:07:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732687671.Y62WIv
00:05:32.207 06:07:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:32.207 06:07:51 -- common/autobuild_common.sh@499 -- $ '[' -n main ']'
00:05:32.207 06:07:51 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:05:32.207 06:07:51 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:05:32.207 06:07:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:32.207 06:07:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:32.207 06:07:51 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:32.207 06:07:51 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:32.207 06:07:51 -- common/autotest_common.sh@10 -- $ set +x
00:05:32.466 06:07:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
00:05:32.466 06:07:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:32.466 06:07:51 -- pm/common@17 -- $ local monitor
00:05:32.466 06:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:32.466 06:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:32.466 06:07:51 -- pm/common@25 -- $ sleep 1
00:05:32.466 06:07:51 -- pm/common@21 -- $ date +%s
00:05:32.466 06:07:51 -- pm/common@21 -- $ date +%s
00:05:32.466 06:07:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732687671
00:05:32.466 06:07:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732687671
00:05:32.466 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732687671_collect-vmstat.pm.log
00:05:32.466 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732687671_collect-cpu-load.pm.log
00:05:33.402 06:07:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:33.402 06:07:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:33.402 06:07:52 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:33.402 06:07:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:33.402 06:07:52 -- spdk/autobuild.sh@16 -- $ date -u
00:05:33.402 Wed Nov 27 06:07:52 AM UTC 2024
00:05:33.402 06:07:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:33.402 v25.01-pre-271-g2f2acf4eb
00:05:33.402 06:07:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:33.402 06:07:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:33.402 06:07:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:33.402 06:07:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:33.402 06:07:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:33.402 06:07:52 -- common/autotest_common.sh@10 -- $ set +x
00:05:33.402 ************************************
00:05:33.402 START TEST ubsan
00:05:33.402 ************************************
00:05:33.402 using ubsan
00:05:33.402 06:07:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:33.402
00:05:33.402 real 0m0.000s
00:05:33.402 user 0m0.000s
00:05:33.402 sys 0m0.000s
00:05:33.402 06:07:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:33.402 06:07:52 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:33.402 ************************************
00:05:33.402 END TEST ubsan
00:05:33.402 ************************************
00:05:33.402 06:07:52 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:05:33.402 06:07:52 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:05:33.402 06:07:52 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:05:33.402 06:07:52 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:05:33.402 06:07:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:33.402 06:07:52 -- common/autotest_common.sh@10 -- $ set +x
00:05:33.402 ************************************
00:05:33.402 START TEST build_native_dpdk
00:05:33.402 ************************************
00:05:33.402 06:07:52 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:05:33.402 a4f455560f version: 24.11-rc4
00:05:33.402 0c81db5870 dts: remove leftover node methods
00:05:33.402 71eae7fe3e doc: correct definition of stats per queue feature
00:05:33.402 f2b1510f19 net/octeon_ep: replace use of word segregate
00:05:33.402 f81cdd729b test/hash: replace use of term segregate
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc4
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
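At this point _build_native_dpdk has picked the compiler (gcc 13), assembled dpdk_cflags, declared the DPDK_DRIVERS array, and changed into the dpdk checkout. A condensed sketch of how that driver array becomes the -Denable_drivers value handed to meson further down in this log; the array contents, flags, and meson arguments are copied from the trace, only the shape of the join is paraphrased:

  # Sketch: how the traced DPDK_DRIVERS array turns into -Denable_drivers.
  DPDK_DRIVERS=(bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
                power/acpi power/amd_pstate power/cppc power/intel_pstate
                power/intel_uncore power/kvm_vm)
  dpdk_cflags='-fPIC -g -fcommon -Werror -Wno-stringop-overflow'
  # printf %s, joins the array with a trailing comma, matching the
  # -Denable_drivers=...,power/kvm_vm, value in the meson call below.
  drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
  meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Dc_link_args= "-Dc_args=$dpdk_cflags" -Dmachine=native \
      -Denable_drivers="$drivers"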
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc4 21.11.0
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 21.11.0
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:05:33.402 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:05:33.402 06:07:52 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:05:33.403 patching file config/rte_config.h
00:05:33.403 Hunk #1 succeeded at 72 (offset 13 lines).
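The lt check just traced (and the lt/ge pair repeated below for 24.07.0) splits each version string on ., -, and : and compares the fields numerically, left to right. A condensed bash sketch of that logic; the function names match scripts/common.sh as shown in the trace, but the bodies are simplified (the real script normalizes fields through its decimal() helper):

  # Simplified sketch of the version comparison traced above.
  cmp_versions() {               # usage: cmp_versions VER1 OP VER2
      local IFS=.-:              # split fields on dots, dashes, and colons
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v a b
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          [[ $a =~ ^[0-9]+$ ]] || a=0   # "rc4" compares as 0 in this sketch
          [[ $b =~ ^[0-9]+$ ]] || b=0
          (( 10#$a > 10#$b )) && { [[ $op == *'>'* ]]; return; }
          (( 10#$a < 10#$b )) && { [[ $op == *'<'* ]]; return; }
      done
      [[ $op == *'='* ]]         # all fields equal: only ops allowing equality pass
  }
  lt() { cmp_versions "$1" '<' "$2"; }    # lt 24.11.0-rc4 21.11.0  -> 1, as traced
  ge() { cmp_versions "$1" '>=' "$2"; }   # ge 24.11.0-rc4 24.07.0  -> 0, as traced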
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc4 24.07.0
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc4 '<' 24.07.0
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc4 24.07.0
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc4 '>=' 24.07.0
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:05:33.403 06:07:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1
00:05:33.403 patching file drivers/bus/pci/linux/pci_uio.c
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:05:33.403 06:07:52 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:05:38.673 The Meson build system
00:05:38.673 Version: 1.5.0
00:05:38.673 Source dir: /home/vagrant/spdk_repo/dpdk
00:05:38.673 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:05:38.673 Build type: native build
00:05:38.673 Project name: DPDK
00:05:38.673 Project version: 24.11.0-rc4
00:05:38.673 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:38.673 C linker for the host machine: gcc ld.bfd 2.40-14
00:05:38.673 Host machine cpu family: x86_64
00:05:38.673 Host machine cpu: x86_64
00:05:38.673 Message: ## Building in Developer Mode ##
00:05:38.673 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:38.673 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:05:38.673 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:05:38.673 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:05:38.673 Program cat found: YES (/usr/bin/cat)
00:05:38.673 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:05:38.673 Compiler for C supports arguments -march=native: YES
00:05:38.673 Checking for size of "void *" : 8
00:05:38.673 Checking for size of "void *" : 8 (cached)
00:05:38.673 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:38.673 Library m found: YES
00:05:38.673 Library numa found: YES
00:05:38.673 Has header "numaif.h" : YES
00:05:38.673 Library fdt found: NO
00:05:38.673 Library execinfo found: NO
00:05:38.673 Has header "execinfo.h" : YES
00:05:38.673 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:38.673 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:38.673 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:38.673 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:38.673 Run-time dependency openssl found: YES 3.1.1
00:05:38.673 Run-time dependency libpcap found: YES 1.10.4
00:05:38.673 Has header "pcap.h" with dependency libpcap: YES
00:05:38.673 Compiler for C supports arguments -Wcast-qual: YES
00:05:38.673 Compiler for C supports arguments -Wdeprecated: YES
00:05:38.673 Compiler for C supports arguments -Wformat: YES
00:05:38.673 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:38.673 Compiler for C supports arguments -Wformat-security: NO
00:05:38.673 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:38.673 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:38.673 Compiler for C supports arguments -Wnested-externs: YES
00:05:38.673 Compiler for C supports arguments -Wold-style-definition: YES
00:05:38.673 Compiler for C supports arguments -Wpointer-arith: YES
00:05:38.673 Compiler for C supports arguments -Wsign-compare: YES
00:05:38.673 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:38.673 Compiler for C supports arguments -Wundef: YES
00:05:38.673 Compiler for C supports arguments -Wwrite-strings: YES
00:05:38.673 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:38.673 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:38.673 Program objdump found: YES (/usr/bin/objdump)
00:05:38.673 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES
00:05:38.673 Checking if "AVX512 checking" compiles: YES
00:05:38.673 Fetching value of define "__AVX512F__" : 1
00:05:38.673 Fetching value of define "__AVX512BW__" : 1
00:05:38.673 Fetching value of define "__AVX512DQ__" : 1
00:05:38.673 Fetching value of define "__AVX512VL__" : 1
00:05:38.673 Fetching value of define "__SSE4_2__" : 1
00:05:38.673 Fetching value of define "__AES__" : 1
00:05:38.673 Fetching value of define "__AVX__" : 1
00:05:38.673 Fetching value of define "__AVX2__" : 1
00:05:38.673 Fetching value of define "__AVX512BW__" : 1
00:05:38.673 Fetching value of define "__AVX512CD__" : 1
00:05:38.673 Fetching value of define "__AVX512DQ__" : 1
00:05:38.673 Fetching value of define "__AVX512F__" : 1
00:05:38.673 Fetching value of define "__AVX512VL__" : 1
00:05:38.673 Fetching value of define "__PCLMUL__" : 1
00:05:38.673 Fetching value of define "__RDRND__" : 1
00:05:38.673 Fetching value of define "__RDSEED__" : 1
00:05:38.673 Fetching value of define "__VPCLMULQDQ__" : 1
00:05:38.673 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:38.673 Message: lib/log: Defining dependency "log"
00:05:38.673 Message: lib/kvargs: Defining dependency "kvargs"
00:05:38.673 Message: lib/argparse: Defining dependency "argparse"
00:05:38.673 Message: lib/telemetry: Defining dependency "telemetry"
00:05:38.673 Checking for function "pthread_attr_setaffinity_np" : YES
00:05:38.673 Checking for function "getentropy" : NO
00:05:38.673 Message: lib/eal: Defining dependency "eal"
00:05:38.673 Message: lib/ptr_compress: Defining dependency "ptr_compress"
00:05:38.673 Message: lib/ring: Defining dependency "ring"
00:05:38.673 Message: lib/rcu: Defining dependency "rcu"
00:05:38.673 Message: lib/mempool: Defining dependency "mempool"
00:05:38.673 Message: lib/mbuf: Defining dependency "mbuf"
00:05:38.673 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:38.673 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:05:38.673 Compiler for C supports arguments -mpclmul: YES
00:05:38.673 Compiler for C supports arguments -maes: YES
00:05:38.673 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:38.673 Message: lib/net: Defining dependency "net"
00:05:38.673 Message: lib/meter: Defining dependency "meter"
00:05:38.673 Message: lib/ethdev: Defining dependency "ethdev"
00:05:38.673 Message: lib/pci: Defining dependency "pci"
00:05:38.673 Message: lib/cmdline: Defining dependency "cmdline"
00:05:38.673 Message: lib/metrics: Defining dependency "metrics"
00:05:38.673 Message: lib/hash: Defining dependency "hash"
00:05:38.673 Message: lib/timer: Defining dependency "timer"
00:05:38.673 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:38.673 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:38.673 Fetching value of define "__AVX512CD__" : 1 (cached)
00:05:38.673 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:38.673 Message: lib/acl: Defining dependency "acl"
00:05:38.673 Message: lib/bbdev: Defining dependency "bbdev"
00:05:38.673 Message: lib/bitratestats: Defining dependency "bitratestats"
00:05:38.673 Run-time dependency libelf found: YES 0.191
00:05:38.673 Message: lib/bpf: Defining dependency "bpf"
00:05:38.673 Message: lib/cfgfile: Defining dependency "cfgfile"
00:05:38.673 Message: lib/compressdev: Defining dependency "compressdev"
00:05:38.673 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:38.673 Message: lib/distributor: Defining dependency "distributor"
00:05:38.673 Message: lib/dmadev: Defining dependency "dmadev"
00:05:38.673 Message: lib/efd: Defining dependency "efd"
00:05:38.673 Message: lib/eventdev: Defining dependency "eventdev"
00:05:38.673 Message: lib/dispatcher: Defining dependency "dispatcher"
00:05:38.673 Message: lib/gpudev: Defining dependency "gpudev"
00:05:38.673 Message: lib/gro: Defining dependency "gro"
00:05:38.673 Message: lib/gso: Defining dependency "gso"
00:05:38.674 Message: lib/ip_frag: Defining dependency "ip_frag"
00:05:38.674 Message: lib/jobstats: Defining dependency "jobstats"
00:05:38.674 Message: lib/latencystats: Defining dependency "latencystats"
00:05:38.674 Message: lib/lpm: Defining dependency "lpm"
00:05:38.674 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:38.674 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:38.674 Fetching value of define "__AVX512IFMA__" : 1
00:05:38.674 Message: lib/member: Defining dependency "member"
00:05:38.674 Message: lib/pcapng: Defining dependency "pcapng"
00:05:38.674 Message: lib/power: Defining dependency "power"
00:05:38.674 Message: lib/rawdev: Defining dependency "rawdev"
00:05:38.674 Message: lib/regexdev: Defining dependency "regexdev"
00:05:38.674 Message: lib/mldev: Defining dependency "mldev"
00:05:38.674 Message: lib/rib: Defining dependency "rib"
00:05:38.674 Message: lib/reorder: Defining dependency "reorder"
00:05:38.674 Message: lib/sched: Defining dependency "sched"
00:05:38.674 Message: lib/security: Defining dependency "security"
00:05:38.674 Message: lib/stack: Defining dependency "stack"
00:05:38.674 Has header "linux/userfaultfd.h" : YES
00:05:38.674 Has header "linux/vduse.h" : YES
00:05:38.674 Message: lib/vhost: Defining dependency "vhost"
00:05:38.674 Message: lib/ipsec: Defining dependency "ipsec"
00:05:38.674 Message: lib/pdcp: Defining dependency "pdcp"
00:05:38.674 Message: lib/fib: Defining dependency "fib"
00:05:38.674 Message: lib/port: Defining dependency "port"
00:05:38.674 Message: lib/pdump: Defining dependency "pdump"
00:05:38.674 Message: lib/table: Defining dependency "table"
00:05:38.674 Message: lib/pipeline: Defining dependency "pipeline"
00:05:38.674 Message: lib/graph: Defining dependency "graph"
00:05:38.674 Message: lib/node: Defining dependency "node"
00:05:38.674 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:38.674 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:38.674 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:38.674 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:38.674 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:38.674 Compiler for C supports arguments -Wno-sign-compare: YES
00:05:38.674 Compiler for C supports arguments -Wno-unused-value: YES
00:05:38.674 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:05:38.674 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:05:38.674 Compiler for C supports arguments -Wno-unused-parameter: YES
00:05:38.674 Compiler for C supports arguments -march=skylake-avx512: YES
00:05:38.674 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:05:38.674 Message: drivers/power/acpi: Defining dependency "power_acpi"
00:05:38.674 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate"
00:05:38.674 Message: drivers/power/cppc: Defining dependency "power_cppc"
00:05:38.674 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate"
00:05:38.674 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore"
00:05:38.674 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm"
00:05:38.674 Has header "sys/epoll.h" : YES
00:05:38.674 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:38.674 Configuring doxy-api-html.conf using configuration
00:05:38.674 Configuring doxy-api-man.conf using configuration
00:05:38.674 Program mandb found: YES (/usr/bin/mandb)
00:05:38.674 Program sphinx-build
found: NO 00:05:38.674 Program sphinx-build found: NO 00:05:38.674 Configuring rte_build_config.h using configuration 00:05:38.674 Message: 00:05:38.674 ================= 00:05:38.674 Applications Enabled 00:05:38.674 ================= 00:05:38.674 00:05:38.674 apps: 00:05:38.674 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:05:38.674 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:05:38.674 test-pmd, test-regex, test-sad, test-security-perf, 00:05:38.674 00:05:38.674 Message: 00:05:38.674 ================= 00:05:38.674 Libraries Enabled 00:05:38.674 ================= 00:05:38.674 00:05:38.674 libs: 00:05:38.674 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:05:38.674 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:05:38.674 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:05:38.674 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:05:38.674 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:05:38.674 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:05:38.674 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:05:38.674 graph, node, 00:05:38.674 00:05:38.674 Message: 00:05:38.674 =============== 00:05:38.674 Drivers Enabled 00:05:38.674 =============== 00:05:38.674 00:05:38.674 common: 00:05:38.674 00:05:38.674 bus: 00:05:38.674 pci, vdev, 00:05:38.674 mempool: 00:05:38.674 ring, 00:05:38.674 dma: 00:05:38.674 00:05:38.674 net: 00:05:38.674 i40e, 00:05:38.674 raw: 00:05:38.674 00:05:38.674 crypto: 00:05:38.674 00:05:38.674 compress: 00:05:38.674 00:05:38.674 regex: 00:05:38.674 00:05:38.674 ml: 00:05:38.674 00:05:38.674 vdpa: 00:05:38.674 00:05:38.674 event: 00:05:38.674 00:05:38.674 baseband: 00:05:38.674 00:05:38.674 gpu: 00:05:38.674 00:05:38.674 power: 00:05:38.674 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:05:38.674 00:05:38.674 Message: 00:05:38.674 ================= 00:05:38.674 Content Skipped 00:05:38.674 ================= 00:05:38.674 00:05:38.674 apps: 00:05:38.674 00:05:38.674 libs: 00:05:38.674 00:05:38.674 drivers: 00:05:38.674 common/cpt: not in enabled drivers build config 00:05:38.674 common/dpaax: not in enabled drivers build config 00:05:38.674 common/iavf: not in enabled drivers build config 00:05:38.674 common/idpf: not in enabled drivers build config 00:05:38.674 common/ionic: not in enabled drivers build config 00:05:38.674 common/mvep: not in enabled drivers build config 00:05:38.674 common/octeontx: not in enabled drivers build config 00:05:38.674 bus/auxiliary: not in enabled drivers build config 00:05:38.674 bus/cdx: not in enabled drivers build config 00:05:38.674 bus/dpaa: not in enabled drivers build config 00:05:38.674 bus/fslmc: not in enabled drivers build config 00:05:38.674 bus/ifpga: not in enabled drivers build config 00:05:38.674 bus/platform: not in enabled drivers build config 00:05:38.674 bus/uacce: not in enabled drivers build config 00:05:38.674 bus/vmbus: not in enabled drivers build config 00:05:38.674 common/cnxk: not in enabled drivers build config 00:05:38.674 common/mlx5: not in enabled drivers build config 00:05:38.674 common/nfp: not in enabled drivers build config 00:05:38.674 common/nitrox: not in enabled drivers build config 00:05:38.674 common/qat: not in enabled drivers build config 00:05:38.674 common/sfc_efx: not in enabled drivers build config 00:05:38.674 mempool/bucket: not in 
enabled drivers build config 00:05:38.674 mempool/cnxk: not in enabled drivers build config 00:05:38.674 mempool/dpaa: not in enabled drivers build config 00:05:38.674 mempool/dpaa2: not in enabled drivers build config 00:05:38.674 mempool/octeontx: not in enabled drivers build config 00:05:38.674 mempool/stack: not in enabled drivers build config 00:05:38.674 dma/cnxk: not in enabled drivers build config 00:05:38.674 dma/dpaa: not in enabled drivers build config 00:05:38.674 dma/dpaa2: not in enabled drivers build config 00:05:38.674 dma/hisilicon: not in enabled drivers build config 00:05:38.674 dma/idxd: not in enabled drivers build config 00:05:38.674 dma/ioat: not in enabled drivers build config 00:05:38.674 dma/odm: not in enabled drivers build config 00:05:38.674 dma/skeleton: not in enabled drivers build config 00:05:38.674 net/af_packet: not in enabled drivers build config 00:05:38.674 net/af_xdp: not in enabled drivers build config 00:05:38.674 net/ark: not in enabled drivers build config 00:05:38.674 net/atlantic: not in enabled drivers build config 00:05:38.674 net/avp: not in enabled drivers build config 00:05:38.674 net/axgbe: not in enabled drivers build config 00:05:38.674 net/bnx2x: not in enabled drivers build config 00:05:38.674 net/bnxt: not in enabled drivers build config 00:05:38.674 net/bonding: not in enabled drivers build config 00:05:38.674 net/cnxk: not in enabled drivers build config 00:05:38.674 net/cpfl: not in enabled drivers build config 00:05:38.674 net/cxgbe: not in enabled drivers build config 00:05:38.675 net/dpaa: not in enabled drivers build config 00:05:38.675 net/dpaa2: not in enabled drivers build config 00:05:38.675 net/e1000: not in enabled drivers build config 00:05:38.675 net/ena: not in enabled drivers build config 00:05:38.675 net/enetc: not in enabled drivers build config 00:05:38.675 net/enetfec: not in enabled drivers build config 00:05:38.675 net/enic: not in enabled drivers build config 00:05:38.675 net/failsafe: not in enabled drivers build config 00:05:38.675 net/fm10k: not in enabled drivers build config 00:05:38.675 net/gve: not in enabled drivers build config 00:05:38.675 net/hinic: not in enabled drivers build config 00:05:38.675 net/hns3: not in enabled drivers build config 00:05:38.675 net/iavf: not in enabled drivers build config 00:05:38.675 net/ice: not in enabled drivers build config 00:05:38.675 net/idpf: not in enabled drivers build config 00:05:38.675 net/igc: not in enabled drivers build config 00:05:38.675 net/ionic: not in enabled drivers build config 00:05:38.675 net/ipn3ke: not in enabled drivers build config 00:05:38.675 net/ixgbe: not in enabled drivers build config 00:05:38.675 net/mana: not in enabled drivers build config 00:05:38.675 net/memif: not in enabled drivers build config 00:05:38.675 net/mlx4: not in enabled drivers build config 00:05:38.675 net/mlx5: not in enabled drivers build config 00:05:38.675 net/mvneta: not in enabled drivers build config 00:05:38.675 net/mvpp2: not in enabled drivers build config 00:05:38.675 net/netvsc: not in enabled drivers build config 00:05:38.675 net/nfb: not in enabled drivers build config 00:05:38.675 net/nfp: not in enabled drivers build config 00:05:38.675 net/ngbe: not in enabled drivers build config 00:05:38.675 net/ntnic: not in enabled drivers build config 00:05:38.675 net/null: not in enabled drivers build config 00:05:38.675 net/octeontx: not in enabled drivers build config 00:05:38.675 net/octeon_ep: not in enabled drivers build config 00:05:38.675 net/pcap: not 
in enabled drivers build config 00:05:38.675 net/pfe: not in enabled drivers build config 00:05:38.675 net/qede: not in enabled drivers build config 00:05:38.675 net/r8169: not in enabled drivers build config 00:05:38.675 net/ring: not in enabled drivers build config 00:05:38.675 net/sfc: not in enabled drivers build config 00:05:38.675 net/softnic: not in enabled drivers build config 00:05:38.675 net/tap: not in enabled drivers build config 00:05:38.675 net/thunderx: not in enabled drivers build config 00:05:38.675 net/txgbe: not in enabled drivers build config 00:05:38.675 net/vdev_netvsc: not in enabled drivers build config 00:05:38.675 net/vhost: not in enabled drivers build config 00:05:38.675 net/virtio: not in enabled drivers build config 00:05:38.675 net/vmxnet3: not in enabled drivers build config 00:05:38.675 net/zxdh: not in enabled drivers build config 00:05:38.675 raw/cnxk_bphy: not in enabled drivers build config 00:05:38.675 raw/cnxk_gpio: not in enabled drivers build config 00:05:38.675 raw/cnxk_rvu_lf: not in enabled drivers build config 00:05:38.675 raw/dpaa2_cmdif: not in enabled drivers build config 00:05:38.675 raw/gdtc: not in enabled drivers build config 00:05:38.675 raw/ifpga: not in enabled drivers build config 00:05:38.675 raw/ntb: not in enabled drivers build config 00:05:38.675 raw/skeleton: not in enabled drivers build config 00:05:38.675 crypto/armv8: not in enabled drivers build config 00:05:38.675 crypto/bcmfs: not in enabled drivers build config 00:05:38.675 crypto/caam_jr: not in enabled drivers build config 00:05:38.675 crypto/ccp: not in enabled drivers build config 00:05:38.675 crypto/cnxk: not in enabled drivers build config 00:05:38.675 crypto/dpaa_sec: not in enabled drivers build config 00:05:38.675 crypto/dpaa2_sec: not in enabled drivers build config 00:05:38.675 crypto/ionic: not in enabled drivers build config 00:05:38.675 crypto/ipsec_mb: not in enabled drivers build config 00:05:38.675 crypto/mlx5: not in enabled drivers build config 00:05:38.675 crypto/mvsam: not in enabled drivers build config 00:05:38.675 crypto/nitrox: not in enabled drivers build config 00:05:38.675 crypto/null: not in enabled drivers build config 00:05:38.675 crypto/octeontx: not in enabled drivers build config 00:05:38.675 crypto/openssl: not in enabled drivers build config 00:05:38.675 crypto/scheduler: not in enabled drivers build config 00:05:38.675 crypto/uadk: not in enabled drivers build config 00:05:38.675 crypto/virtio: not in enabled drivers build config 00:05:38.675 compress/isal: not in enabled drivers build config 00:05:38.675 compress/mlx5: not in enabled drivers build config 00:05:38.675 compress/nitrox: not in enabled drivers build config 00:05:38.675 compress/octeontx: not in enabled drivers build config 00:05:38.675 compress/uadk: not in enabled drivers build config 00:05:38.675 compress/zlib: not in enabled drivers build config 00:05:38.675 regex/mlx5: not in enabled drivers build config 00:05:38.675 regex/cn9k: not in enabled drivers build config 00:05:38.675 ml/cnxk: not in enabled drivers build config 00:05:38.675 vdpa/ifc: not in enabled drivers build config 00:05:38.675 vdpa/mlx5: not in enabled drivers build config 00:05:38.675 vdpa/nfp: not in enabled drivers build config 00:05:38.675 vdpa/sfc: not in enabled drivers build config 00:05:38.675 event/cnxk: not in enabled drivers build config 00:05:38.675 event/dlb2: not in enabled drivers build config 00:05:38.675 event/dpaa: not in enabled drivers build config 00:05:38.675 event/dpaa2: not in 
enabled drivers build config 00:05:38.675 event/dsw: not in enabled drivers build config 00:05:38.675 event/opdl: not in enabled drivers build config 00:05:38.675 event/skeleton: not in enabled drivers build config 00:05:38.675 event/sw: not in enabled drivers build config 00:05:38.675 event/octeontx: not in enabled drivers build config 00:05:38.675 baseband/acc: not in enabled drivers build config 00:05:38.675 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:05:38.675 baseband/fpga_lte_fec: not in enabled drivers build config 00:05:38.675 baseband/la12xx: not in enabled drivers build config 00:05:38.675 baseband/null: not in enabled drivers build config 00:05:38.675 baseband/turbo_sw: not in enabled drivers build config 00:05:38.675 gpu/cuda: not in enabled drivers build config 00:05:38.675 power/amd_uncore: not in enabled drivers build config 00:05:38.675 00:05:38.675 00:05:38.675 Message: DPDK build config complete: 00:05:38.675 source path = "/home/vagrant/spdk_repo/dpdk" 00:05:38.675 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:05:38.675 Build targets in project: 244 00:05:38.675 00:05:38.675 DPDK 24.11.0-rc4 00:05:38.675 00:05:38.675 User defined options 00:05:38.675 libdir : lib 00:05:38.675 prefix : /home/vagrant/spdk_repo/dpdk/build 00:05:38.675 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:05:38.675 c_link_args : 00:05:38.675 enable_docs : false 00:05:38.675 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:05:38.675 enable_kmods : false 00:05:39.616 machine : native 00:05:39.616 tests : false 00:05:39.616 00:05:39.616 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:39.616 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
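For reference, the "User defined options" summary above maps onto a meson invocation along the following lines. This is a sketch reconstructed from the logged values, not the pipeline's literal command; every option name and value is taken verbatim from the summary, and the `meson setup` subcommand form is the non-deprecated spelling that the WARNING above asks for:

  $ cd /home/vagrant/spdk_repo/dpdk
  $ meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
  $ ninja -C build-tmp -j10   # matches the ninja invocation logged below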
00:05:39.880 06:07:59 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:05:39.880 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:39.880 [1/764] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:05:39.880 [2/764] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:05:39.880 [3/764] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:05:39.880 [4/764] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:05:39.880 [5/764] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:39.880 [6/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:39.880 [7/764] Linking static target lib/librte_kvargs.a 00:05:40.142 [8/764] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:40.142 [9/764] Linking static target lib/librte_log.a 00:05:40.142 [10/764] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:05:40.142 [11/764] Linking static target lib/librte_argparse.a 00:05:40.142 [12/764] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.142 [13/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:40.142 [14/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:40.142 [15/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:40.142 [16/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:40.142 [17/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:40.142 [18/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:40.402 [19/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:40.402 [20/764] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.402 [21/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:40.402 [22/764] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.402 [23/764] Linking target lib/librte_log.so.25.0 00:05:40.660 [24/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:40.660 [25/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:05:40.660 [26/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:40.660 [27/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:40.660 [28/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:40.660 [29/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:40.660 [30/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:40.918 [31/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:40.918 [32/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:40.918 [33/764] Linking static target lib/librte_telemetry.a 00:05:40.918 [34/764] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:05:40.918 [35/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:40.918 [36/764] Linking target lib/librte_kvargs.so.25.0 00:05:40.918 [37/764] Linking target lib/librte_argparse.so.25.0 00:05:40.918 [38/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:40.918 [39/764] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:41.176 [40/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:41.176 [41/764] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:05:41.176 [42/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:41.176 [43/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:41.176 [44/764] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.176 [45/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:41.176 [46/764] Linking target lib/librte_telemetry.so.25.0 00:05:41.176 [47/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:41.176 [48/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:41.176 [49/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:41.176 [50/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:41.434 [51/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:41.434 [52/764] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:05:41.434 [53/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:05:41.434 [54/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:41.434 [55/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:41.692 [56/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:41.692 [57/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:41.692 [58/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:41.692 [59/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:41.692 [60/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:41.692 [61/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:41.692 [62/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:41.949 [63/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:41.949 [64/764] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:41.949 [65/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:41.949 [66/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:41.949 [67/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:41.949 [68/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:41.949 [69/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:42.206 [70/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:42.206 [71/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:42.206 [72/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:42.206 [73/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:42.206 [74/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:42.206 [75/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:42.206 [76/764] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:42.463 [77/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:42.463 [78/764] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:42.463 [79/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:42.463 [80/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:42.463 [81/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:42.464 [82/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:42.464 [83/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:42.464 [84/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:42.464 [85/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:42.721 [86/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:42.721 [87/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:42.721 [88/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:42.721 [89/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:42.721 [90/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:05:42.721 [91/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:42.979 [92/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:42.979 [93/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:42.979 [94/764] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:42.979 [95/764] Linking static target lib/librte_ring.a 00:05:42.979 [96/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:42.979 [97/764] Linking static target lib/librte_eal.a 00:05:43.237 [98/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:43.237 [99/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:43.237 [100/764] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:43.237 [101/764] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.237 [102/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:43.237 [103/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:43.495 [104/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:43.495 [105/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:43.495 [106/764] Linking static target lib/librte_mempool.a 00:05:43.495 [107/764] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:43.495 [108/764] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:43.495 [109/764] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:43.495 [110/764] Linking static target lib/librte_rcu.a 00:05:43.495 [111/764] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:43.752 [112/764] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:43.752 [113/764] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:43.752 [114/764] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.752 [115/764] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:43.752 [116/764] Linking static target lib/librte_net.a 00:05:43.752 [117/764] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:43.752 [118/764] Linking static target lib/librte_meter.a 00:05:44.009 [119/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:44.009 [120/764] Linking static target lib/librte_mbuf.a 00:05:44.009 [121/764] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:44.009 [122/764] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.009 [123/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:44.009 [124/764] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.009 [125/764] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.009 [126/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:44.009 [127/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:44.266 [128/764] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.524 [129/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:44.524 [130/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:44.781 [131/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:44.781 [132/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:45.039 [133/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:45.039 [134/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:45.039 [135/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:45.039 [136/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:45.039 [137/764] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:45.039 [138/764] Linking static target lib/librte_pci.a 00:05:45.039 [139/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:45.039 [140/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:45.039 [141/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:45.298 [142/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:45.298 [143/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:45.298 [144/764] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.298 [145/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:45.298 [146/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:45.298 [147/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:45.298 [148/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:45.298 [149/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:45.298 [150/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:45.298 [151/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:45.298 [152/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:45.557 [153/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:45.557 [154/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:45.557 [155/764] Linking static target lib/librte_cmdline.a 00:05:45.557 [156/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:05:45.557 [157/764] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:45.557 [158/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:05:45.557 [159/764] Linking static target lib/librte_metrics.a 00:05:45.814 [160/764] Compiling C 
object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:45.814 [161/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:45.814 [162/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:46.070 [163/764] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.070 [164/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:05:46.070 [165/764] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.327 [166/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:46.327 [167/764] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:46.327 [168/764] Linking static target lib/librte_timer.a 00:05:46.327 [169/764] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:05:46.584 [170/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:05:46.584 [171/764] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:05:46.584 [172/764] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:05:46.584 [173/764] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.841 [174/764] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:05:46.841 [175/764] Linking static target lib/librte_bitratestats.a 00:05:47.099 [176/764] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.099 [177/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:05:47.099 [178/764] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:05:47.099 [179/764] Linking static target lib/librte_bbdev.a 00:05:47.358 [180/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:05:47.358 [181/764] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:05:47.358 [182/764] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:47.358 [183/764] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.358 [184/764] Linking static target lib/librte_hash.a 00:05:47.358 [185/764] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.617 [186/764] Linking target lib/librte_eal.so.25.0 00:05:47.617 [187/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:05:47.617 [188/764] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:05:47.617 [189/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:05:47.617 [190/764] Linking target lib/librte_ring.so.25.0 00:05:47.617 [191/764] Linking target lib/librte_meter.so.25.0 00:05:47.617 [192/764] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:05:47.617 [193/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:47.875 [194/764] Linking target lib/librte_rcu.so.25.0 00:05:47.875 [195/764] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:05:47.875 [196/764] Linking target lib/librte_mempool.so.25.0 00:05:47.875 [197/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:05:47.875 [198/764] Linking static target lib/librte_ethdev.a 00:05:47.875 [199/764] Linking target lib/librte_pci.so.25.0 00:05:47.875 [200/764] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:05:47.875 [201/764] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:05:47.875 [202/764] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:05:47.875 [203/764] Linking target lib/librte_timer.so.25.0 00:05:47.875 [204/764] Linking target lib/librte_mbuf.so.25.0 00:05:47.875 [205/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:05:47.875 [206/764] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:05:47.875 [207/764] Linking static target lib/acl/libavx2_tmp.a 00:05:47.875 [208/764] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:05:47.875 [209/764] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.177 [210/764] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:05:48.177 [211/764] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:05:48.177 [212/764] Linking target lib/librte_net.so.25.0 00:05:48.177 [213/764] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:05:48.177 [214/764] Linking target lib/librte_cmdline.so.25.0 00:05:48.177 [215/764] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:05:48.177 [216/764] Linking target lib/librte_bbdev.so.25.0 00:05:48.177 [217/764] Linking target lib/librte_hash.so.25.0 00:05:48.177 [218/764] Linking static target lib/librte_cfgfile.a 00:05:48.177 [219/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:48.434 [220/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:05:48.434 [221/764] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:05:48.434 [222/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:48.434 [223/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:05:48.434 [224/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:05:48.434 [225/764] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.691 [226/764] Linking target lib/librte_cfgfile.so.25.0 00:05:48.691 [227/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:48.691 [228/764] Linking static target lib/librte_compressdev.a 00:05:48.691 [229/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:05:48.691 [230/764] Linking static target lib/librte_bpf.a 00:05:48.691 [231/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:48.948 [232/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:05:48.948 [233/764] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.948 [234/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:48.948 [235/764] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.948 [236/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:05:49.241 [237/764] Linking target lib/librte_compressdev.so.25.0 00:05:49.241 [238/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:05:49.241 [239/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:05:49.241 [240/764] Linking static target lib/librte_acl.a 00:05:49.241 [241/764] Linking static target lib/librte_distributor.a 00:05:49.241 [242/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:49.241 [243/764] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:05:49.498 [244/764] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.498 [245/764] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.498 [246/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:49.498 [247/764] Linking static target lib/librte_dmadev.a 00:05:49.498 [248/764] Linking target lib/librte_distributor.so.25.0 00:05:49.498 [249/764] Linking target lib/librte_acl.so.25.0 00:05:49.498 [250/764] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:05:49.756 [251/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:05:49.756 [252/764] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.756 [253/764] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:05:49.756 [254/764] Linking static target lib/librte_efd.a 00:05:49.756 [255/764] Linking target lib/librte_dmadev.so.25.0 00:05:50.016 [256/764] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:05:50.016 [257/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:50.016 [258/764] Linking static target lib/librte_cryptodev.a 00:05:50.016 [259/764] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.016 [260/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:05:50.016 [261/764] Linking target lib/librte_efd.so.25.0 00:05:50.016 [262/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:05:50.016 [263/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:05:50.582 [264/764] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:05:50.582 [265/764] Linking static target lib/librte_dispatcher.a 00:05:50.582 [266/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:05:50.582 [267/764] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:05:50.582 [268/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:05:50.582 [269/764] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:05:50.582 [270/764] Linking static target lib/librte_gpudev.a 00:05:50.841 [271/764] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.841 [272/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:05:50.841 [273/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:05:51.150 [274/764] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.150 [275/764] Linking target lib/librte_cryptodev.so.25.0 00:05:51.150 [276/764] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:05:51.150 [277/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:05:51.150 [278/764] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:05:51.409 [279/764] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.409 [280/764] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:05:51.409 [281/764] Linking target lib/librte_gpudev.so.25.0 00:05:51.409 [282/764] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:05:51.409 [283/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 
00:05:51.409 [284/764] Linking static target lib/librte_gro.a 00:05:51.409 [285/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:05:51.409 [286/764] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:05:51.409 [287/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:05:51.409 [288/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:05:51.667 [289/764] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.667 [290/764] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:05:51.667 [291/764] Linking static target lib/librte_gso.a 00:05:51.667 [292/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:05:51.667 [293/764] Linking static target lib/librte_eventdev.a 00:05:51.667 [294/764] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.667 [295/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:05:51.925 [296/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:05:51.925 [297/764] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.925 [298/764] Linking target lib/librte_ethdev.so.25.0 00:05:51.925 [299/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:05:51.925 [300/764] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:05:51.925 [301/764] Linking static target lib/librte_jobstats.a 00:05:51.925 [302/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:05:51.925 [303/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:05:51.925 [304/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:05:51.925 [305/764] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:05:51.925 [306/764] Linking static target lib/librte_ip_frag.a 00:05:52.183 [307/764] Linking target lib/librte_metrics.so.25.0 00:05:52.183 [308/764] Linking target lib/librte_bpf.so.25.0 00:05:52.183 [309/764] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:05:52.183 [310/764] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:05:52.183 [311/764] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.183 [312/764] Linking target lib/librte_gro.so.25.0 00:05:52.183 [313/764] Linking target lib/librte_bitratestats.so.25.0 00:05:52.183 [314/764] Linking target lib/librte_gso.so.25.0 00:05:52.183 [315/764] Linking target lib/librte_jobstats.so.25.0 00:05:52.441 [316/764] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:05:52.441 [317/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:05:52.441 [318/764] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:05:52.441 [319/764] Linking static target lib/librte_latencystats.a 00:05:52.441 [320/764] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:52.441 [321/764] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.441 [322/764] Linking target lib/librte_ip_frag.so.25.0 00:05:52.441 [323/764] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.699 [324/764] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 
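Each "Generating symbol file ... .symbols" step above accompanies a versioned shared object linked under the build tree's lib/ directory. As an illustrative sanity check only (the path is assembled from the build path in the configuration summary, and librte_gro.so.25.0 is one of the targets just linked), the exported API of a freshly linked library can be listed with nm:

  $ nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_gro.so.25.0 | head
  # expected to show rte_gro_* entry points, e.g. rte_gro_reassemble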
00:05:52.699 [325/764] Linking target lib/librte_latencystats.so.25.0 00:05:52.699 [326/764] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:05:52.699 [327/764] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:05:52.699 [328/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:05:52.699 [329/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:05:52.699 [330/764] Linking static target lib/librte_lpm.a 00:05:52.699 [331/764] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:05:52.957 [332/764] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:52.957 [333/764] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.957 [334/764] Linking target lib/librte_lpm.so.25.0 00:05:52.957 [335/764] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:05:52.957 [336/764] Linking static target lib/librte_pcapng.a 00:05:52.957 [337/764] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:05:53.215 [338/764] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:05:53.215 [339/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:05:53.215 [340/764] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:53.215 [341/764] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:05:53.215 [342/764] Linking static target lib/librte_power.a 00:05:53.215 [343/764] Linking static target lib/librte_rawdev.a 00:05:53.215 [344/764] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.215 [345/764] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:05:53.215 [346/764] Linking static target lib/librte_regexdev.a 00:05:53.215 [347/764] Linking target lib/librte_pcapng.so.25.0 00:05:53.215 [348/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:05:53.215 [349/764] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:05:53.473 [350/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:05:53.473 [351/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:05:53.473 [352/764] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.473 [353/764] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.473 [354/764] Linking target lib/librte_eventdev.so.25.0 00:05:53.473 [355/764] Linking target lib/librte_rawdev.so.25.0 00:05:53.473 [356/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:05:53.473 [357/764] Linking static target lib/librte_mldev.a 00:05:53.731 [358/764] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:05:53.731 [359/764] Linking target lib/librte_dispatcher.so.25.0 00:05:53.731 [360/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:05:53.731 [361/764] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.731 [362/764] Linking target lib/librte_power.so.25.0 00:05:53.731 [363/764] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.731 [364/764] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:05:53.731 [365/764] Linking target lib/librte_regexdev.so.25.0 00:05:53.989 [366/764] Compiling C object 
lib/librte_sched.a.p/sched_rte_red.c.o 00:05:53.990 [367/764] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:05:53.990 [368/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:05:53.990 [369/764] Linking static target lib/librte_member.a 00:05:53.990 [370/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:05:53.990 [371/764] Linking static target lib/librte_rib.a 00:05:53.990 [372/764] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:05:53.990 [373/764] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:53.990 [374/764] Linking static target lib/librte_reorder.a 00:05:53.990 [375/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:05:53.990 [376/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:05:54.249 [377/764] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.249 [378/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:05:54.249 [379/764] Linking static target lib/librte_stack.a 00:05:54.249 [380/764] Linking target lib/librte_member.so.25.0 00:05:54.249 [381/764] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:54.249 [382/764] Linking static target lib/librte_security.a 00:05:54.249 [383/764] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.249 [384/764] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.249 [385/764] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:54.249 [386/764] Linking target lib/librte_reorder.so.25.0 00:05:54.249 [387/764] Linking target lib/librte_rib.so.25.0 00:05:54.249 [388/764] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.508 [389/764] Linking target lib/librte_stack.so.25.0 00:05:54.508 [390/764] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:05:54.508 [391/764] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:05:54.508 [392/764] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:54.508 [393/764] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.508 [394/764] Linking target lib/librte_security.so.25.0 00:05:54.508 [395/764] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:54.508 [396/764] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:05:54.767 [397/764] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.767 [398/764] Linking static target lib/librte_sched.a 00:05:54.767 [399/764] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:54.767 [400/764] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:05:54.767 [401/764] Linking target lib/librte_mldev.so.25.0 00:05:54.767 [402/764] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.062 [403/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:55.062 [404/764] Linking target lib/librte_sched.so.25.0 00:05:55.062 [405/764] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:05:55.062 [406/764] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:05:55.062 [407/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:05:55.321 [408/764] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:55.321 [409/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:55.321 [410/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:05:55.579 [411/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:55.579 [412/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:05:55.579 [413/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:05:55.579 [414/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:05:55.837 [415/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:05:55.837 [416/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:05:55.837 [417/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:05:56.243 [418/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:05:56.243 [419/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:05:56.243 [420/764] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:05:56.243 [421/764] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:05:56.243 [422/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:05:56.243 [423/764] Linking static target lib/librte_ipsec.a 00:05:56.502 [424/764] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.502 [425/764] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:05:56.502 [426/764] Linking target lib/librte_ipsec.so.25.0 00:05:56.502 [427/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:05:56.502 [428/764] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:05:56.502 [429/764] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:05:56.760 [430/764] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:05:56.760 [431/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:05:56.760 [432/764] Linking static target lib/librte_pdcp.a 00:05:56.760 [433/764] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:05:56.760 [434/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:05:56.760 [435/764] Linking static target lib/librte_fib.a 00:05:57.020 [436/764] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:05:57.020 [437/764] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.020 [438/764] Linking target lib/librte_pdcp.so.25.0 00:05:57.020 [439/764] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:05:57.020 [440/764] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:05:57.020 [441/764] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.020 [442/764] Linking target lib/librte_fib.so.25.0 00:05:57.279 [443/764] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:05:57.538 [444/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:05:57.538 [445/764] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:05:57.538 [446/764] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:05:57.538 [447/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:05:57.538 [448/764] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:05:57.538 [449/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:05:57.796 [450/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 
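Once all 764 targets are built, the usual way to consume a tree configured like this one is to install it into the prefix from the configuration summary and resolve flags through pkg-config against the `libdpdk` package that DPDK ships. A usage sketch under those assumptions (the install step itself is not part of this log):

  $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp install
  $ PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
    pkg-config --cflags --libs libdpdk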
00:05:57.796 [451/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:05:57.796 [452/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:05:57.796 [453/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:05:58.056 [454/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:05:58.056 [455/764] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:05:58.056 [456/764] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:05:58.056 [457/764] Linking static target lib/librte_pdump.a 00:05:58.056 [458/764] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:05:58.315 [459/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:05:58.315 [460/764] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.315 [461/764] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:05:58.315 [462/764] Linking static target lib/librte_port.a 00:05:58.315 [463/764] Linking target lib/librte_pdump.so.25.0 00:05:58.573 [464/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:05:58.573 [465/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:05:58.573 [466/764] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:05:58.573 [467/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:05:58.573 [468/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:58.573 [469/764] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.573 [470/764] Linking target lib/librte_port.so.25.0 00:05:58.573 [471/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:05:58.831 [472/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:05:58.831 [473/764] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:05:58.831 [474/764] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:05:58.831 [475/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:05:58.831 [476/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:05:58.831 [477/764] Linking static target lib/librte_table.a 00:05:58.831 [478/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:05:59.089 [479/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:05:59.089 [480/764] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:05:59.347 [481/764] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.347 [482/764] Linking target lib/librte_table.so.25.0 00:05:59.347 [483/764] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:05:59.347 [484/764] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:05:59.347 [485/764] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:05:59.605 [486/764] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:05:59.605 [487/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:05:59.605 [488/764] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:05:59.605 [489/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:05:59.862 [490/764] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:05:59.862 [491/764] Compiling C object 
lib/librte_graph.a.p/graph_graph_stats.c.o 00:05:59.862 [492/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:06:00.120 [493/764] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:06:00.120 [494/764] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:06:00.120 [495/764] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:06:00.120 [496/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:06:00.120 [497/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:06:00.120 [498/764] Linking static target lib/librte_graph.a 00:06:00.120 [499/764] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:06:00.378 [500/764] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:06:00.636 [501/764] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.636 [502/764] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:06:00.636 [503/764] Linking target lib/librte_graph.so.25.0 00:06:00.636 [504/764] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:06:00.636 [505/764] Compiling C object lib/librte_node.a.p/node_null.c.o 00:06:00.636 [506/764] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:06:00.636 [507/764] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:06:00.894 [508/764] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:06:00.894 [509/764] Compiling C object lib/librte_node.a.p/node_log.c.o 00:06:00.894 [510/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:00.894 [511/764] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:06:00.894 [512/764] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:06:00.894 [513/764] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:06:01.152 [514/764] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:06:01.152 [515/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:01.152 [516/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:01.152 [517/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:01.152 [518/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:01.152 [519/764] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:06:01.152 [520/764] Linking static target lib/librte_node.a 00:06:01.152 [521/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:01.410 [522/764] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.410 [523/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:01.410 [524/764] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:01.410 [525/764] Linking target lib/librte_node.so.25.0 00:06:01.410 [526/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:01.410 [527/764] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:01.668 [528/764] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:01.668 [529/764] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:01.668 [530/764] Linking static target drivers/librte_bus_pci.a 00:06:01.668 [531/764] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:01.668 [532/764] Compiling C object 
drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:01.668 [533/764] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:01.668 [534/764] Linking static target drivers/librte_bus_vdev.a 00:06:01.668 [535/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:06:01.668 [536/764] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:01.668 [537/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:06:01.668 [538/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:06:01.926 [539/764] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.926 [540/764] Linking target drivers/librte_bus_vdev.so.25.0 00:06:01.926 [541/764] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:01.926 [542/764] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:01.926 [543/764] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:06:01.926 [544/764] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.926 [545/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:06:01.926 [546/764] Linking target drivers/librte_bus_pci.so.25.0 00:06:02.185 [547/764] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:02.185 [548/764] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:02.185 [549/764] Linking static target drivers/librte_mempool_ring.a 00:06:02.185 [550/764] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:06:02.185 [551/764] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:02.185 [552/764] Linking target drivers/librte_mempool_ring.so.25.0 00:06:02.185 [553/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:06:02.444 [554/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:06:02.702 [555/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:06:02.702 [556/764] Linking static target drivers/net/i40e/base/libi40e_base.a 00:06:02.959 [557/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:06:03.217 [558/764] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:06:03.217 [559/764] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:06:03.217 [560/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:06:03.474 [561/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:06:03.474 [562/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:06:03.732 [563/764] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:06:03.732 [564/764] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:06:03.732 [565/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:06:03.732 [566/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:06:03.989 [567/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:06:03.989 [568/764] Compiling C object 
drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:06:03.989 [569/764] Linking static target drivers/libtmp_rte_power_acpi.a 00:06:04.248 [570/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:06:04.248 [571/764] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:06:04.248 [572/764] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:06:04.248 [573/764] Linking static target drivers/librte_power_acpi.a 00:06:04.248 [574/764] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:06:04.248 [575/764] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:06:04.248 [576/764] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:06:04.248 [577/764] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:06:04.248 [578/764] Linking static target drivers/libtmp_rte_power_cppc.a 00:06:04.248 [579/764] Linking target drivers/librte_power_acpi.so.25.0 00:06:04.505 [580/764] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:06:04.505 [581/764] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:06:04.505 [582/764] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:06:04.505 [583/764] Linking static target drivers/librte_power_amd_pstate.a 00:06:04.505 [584/764] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:06:04.505 [585/764] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:06:04.505 [586/764] Linking static target drivers/librte_power_cppc.a 00:06:04.505 [587/764] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:06:04.506 [588/764] Linking target drivers/librte_power_amd_pstate.so.25.0 00:06:04.506 [589/764] Linking target drivers/librte_power_cppc.so.25.0 00:06:04.506 [590/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:06:04.506 [591/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:06:04.506 [592/764] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:06:04.764 [593/764] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:06:04.764 [594/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:06:04.764 [595/764] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:06:04.764 [596/764] Linking static target drivers/librte_power_kvm_vm.a 00:06:04.764 [597/764] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:06:04.764 [598/764] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:06:04.764 [599/764] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:06:04.764 [600/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:06:04.764 [601/764] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:06:04.764 [602/764] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.764 [603/764] Linking target drivers/librte_power_kvm_vm.so.25.0 00:06:05.022 [604/764] Generating drivers/rte_power_intel_uncore.pmd.c with a custom 
command 00:06:05.022 [605/764] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:06:05.023 [606/764] Linking static target drivers/librte_power_intel_uncore.a 00:06:05.023 [607/764] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:06:05.023 [608/764] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:06:05.023 [609/764] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:06:05.023 [610/764] Linking target drivers/librte_power_intel_uncore.so.25.0 00:06:05.023 [611/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:06:05.023 [612/764] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:06:05.023 [613/764] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:06:05.023 [614/764] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:06:05.023 [615/764] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:06:05.023 [616/764] Linking static target drivers/librte_power_intel_pstate.a 00:06:05.023 [617/764] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:06:05.281 [618/764] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:06:05.281 [619/764] Linking target drivers/librte_power_intel_pstate.so.25.0 00:06:05.281 [620/764] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:06:05.281 [621/764] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:06:05.539 [622/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:06:05.539 [623/764] Linking static target drivers/libtmp_rte_net_i40e.a 00:06:05.539 [624/764] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:06:05.539 [625/764] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:06:05.539 [626/764] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:06:05.539 [627/764] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:06:05.539 [628/764] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:06:05.539 [629/764] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:06:05.539 [630/764] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:05.797 [631/764] Linking static target drivers/librte_net_i40e.a 00:06:05.797 [632/764] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:06:05.797 [633/764] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:05.797 [634/764] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:06:05.797 [635/764] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:06:05.797 [636/764] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:06:05.797 [637/764] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:06:05.797 [638/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:06:05.797 [639/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:06:06.063 [640/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:06:06.063 [641/764] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.323 [642/764] Linking target drivers/librte_net_i40e.so.25.0 00:06:06.323 [643/764] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:06:06.323 [644/764] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:06:06.581 [645/764] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:06:06.581 [646/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:06:06.839 [647/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:06.839 [648/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:06:06.839 [649/764] Linking static target lib/librte_vhost.a 00:06:06.839 [650/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:06:06.839 [651/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:06:06.839 [652/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:06:06.839 [653/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:06:07.096 [654/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:06:07.096 [655/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:06:07.096 [656/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:06:07.355 [657/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:06:07.355 [658/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:06:07.355 [659/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:06:07.355 [660/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:06:07.355 [661/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:06:07.612 [662/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:06:07.612 [663/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:06:07.612 [664/764] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.612 [665/764] Linking target lib/librte_vhost.so.25.0 00:06:07.612 [666/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:06:07.871 [667/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:06:07.871 [668/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:06:07.871 [669/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:06:07.871 [670/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:06:07.871 [671/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:06:08.130 [672/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:06:08.130 [673/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:06:08.130 [674/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:06:08.388 [675/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:06:08.647 [676/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:06:08.647 [677/764] Linking static target lib/librte_pipeline.a 00:06:08.905 [678/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:06:08.905 [679/764] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:06:08.905 [680/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:06:08.905 [681/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:06:08.905 [682/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:06:08.905 [683/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:06:08.905 [684/764] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:06:09.163 [685/764] Linking target app/dpdk-dumpcap 00:06:09.163 [686/764] Linking target app/dpdk-pdump 00:06:09.163 [687/764] Linking target app/dpdk-graph 00:06:09.163 [688/764] Linking target app/dpdk-test-cmdline 00:06:09.163 [689/764] Linking target app/dpdk-proc-info 00:06:09.163 [690/764] Linking target app/dpdk-test-compress-perf 00:06:09.422 [691/764] Linking target app/dpdk-test-acl 00:06:09.422 [692/764] Linking target app/dpdk-test-crypto-perf 00:06:09.422 [693/764] Linking target app/dpdk-test-dma-perf 00:06:09.422 [694/764] Linking target app/dpdk-test-fib 00:06:09.422 [695/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:06:09.422 [696/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:06:09.682 [697/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:06:09.682 [698/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:06:09.682 [699/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:06:09.682 [700/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:06:09.682 [701/764] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:06:09.939 [702/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:06:09.939 [703/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:06:09.939 [704/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:06:09.939 [705/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:06:09.939 [706/764] Linking target app/dpdk-test-gpudev 00:06:10.197 [707/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:06:10.197 [708/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:06:10.197 [709/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:06:10.197 [710/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:06:10.197 [711/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:06:10.197 [712/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:06:10.456 [713/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:06:10.456 [714/764] Linking target app/dpdk-test-flow-perf 00:06:10.456 [715/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:06:10.456 [716/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:06:10.456 [717/764] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.456 [718/764] Linking target app/dpdk-test-eventdev 00:06:10.456 [719/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:06:10.751 [720/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:06:10.751 [721/764] Linking target 
lib/librte_pipeline.so.25.0 00:06:10.751 [722/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:06:10.751 [723/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:06:10.751 [724/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:06:10.751 [725/764] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:06:11.009 [726/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:06:11.009 [727/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:06:11.009 [728/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:06:11.009 [729/764] Linking target app/dpdk-test-bbdev 00:06:11.009 [730/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:06:11.268 [731/764] Linking target app/dpdk-test-pipeline 00:06:11.268 [732/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:06:11.268 [733/764] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:06:11.527 [734/764] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:06:11.527 [735/764] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:06:11.527 [736/764] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:06:11.527 [737/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:06:11.786 [738/764] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:06:11.786 [739/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:06:11.786 [740/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:06:12.044 [741/764] Linking target app/dpdk-test-mldev 00:06:12.044 [742/764] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:06:12.044 [743/764] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:06:12.044 [744/764] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:06:12.044 [745/764] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:06:12.044 [746/764] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:06:12.303 [747/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:06:12.303 [748/764] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:06:12.561 [749/764] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:06:12.561 [750/764] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:06:12.561 [751/764] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:06:12.820 [752/764] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:06:12.820 [753/764] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:06:12.820 [754/764] Linking target app/dpdk-test-sad 00:06:12.820 [755/764] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:06:13.077 [756/764] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:06:13.077 [757/764] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:06:13.077 [758/764] Linking target app/dpdk-test-regex 00:06:13.077 [759/764] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:06:13.334 [760/764] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:06:13.334 [761/764] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:06:13.334 [762/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:06:13.592 [763/764] Linking target app/dpdk-test-security-perf 
00:06:13.592 [764/764] Linking target app/dpdk-testpmd 00:06:13.592 06:08:33 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:06:13.592 06:08:33 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:06:13.592 06:08:33 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:06:13.850 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:06:13.850 [0/1] Installing files. 00:06:14.111 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:06:14.111 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 
00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:14.113 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.113 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:14.114 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.114 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 
00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:14.115 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.115 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:14.116 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:14.116 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:14.116 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_cmdline.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing 
lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.116 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.379 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.379 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.379 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.379 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.379 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.379 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.379 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing 
lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.380 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:06:14.380 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-dma-perf to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.380 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.381 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.382 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 
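The rte_hash, rte_lpm, and rte_table headers staged above are part of DPDK's public API surface under build/include. As an illustration of what a consumer of this staged tree looks like, here is a minimal C sketch against rte_hash.h; the file name demo_hash.c, the key type, and the EAL arguments are assumptions for illustration and are not part of this job:

    /* demo_hash.c -- illustrative only; not built by this pipeline.
     * Exercises rte_hash.h/rte_jhash.h as staged into build/include above. */
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)    /* EAL must come up first */
            return 1;

        struct rte_hash_parameters params = {
            .name = "demo_hash",
            .entries = 1024,
            .key_len = sizeof(uint32_t),
            .hash_func = rte_jhash,          /* from rte_jhash.h, installed above */
            .hash_func_init_val = 0,
            .socket_id = (int)rte_socket_id(),
        };
        struct rte_hash *h = rte_hash_create(&params);
        if (h == NULL)
            return 1;

        uint32_t key = 42;
        rte_hash_add_key(h, &key);           /* returns a slot index >= 0 on success */
        int pos = rte_hash_lookup(h, &key);  /* negative if the key is absent */

        rte_hash_free(h);
        rte_eal_cleanup();
        return pos >= 0 ? 0 : 1;
    }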
00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:14.383 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:14.383 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:14.383 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:06:14.383 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:06:14.383 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:06:14.383 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:06:14.383 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:06:14.383 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:06:14.383 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:06:14.383 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:06:14.383 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:06:14.383 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:06:14.383 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:06:14.383 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:06:14.383 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:06:14.383 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:06:14.383 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:06:14.383 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:06:14.383 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:06:14.383 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:06:14.383 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:06:14.383 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:06:14.383 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:06:14.383 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:06:14.383 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:06:14.383 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:06:14.383 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:06:14.383 Installing symlink pointing to librte_pci.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:06:14.383 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:06:14.383 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:06:14.383 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:06:14.383 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:06:14.383 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:06:14.384 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:06:14.384 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:06:14.384 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:06:14.384 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:06:14.384 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:06:14.384 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:06:14.384 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:06:14.384 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:06:14.384 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:06:14.384 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:06:14.384 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:06:14.384 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:06:14.384 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:06:14.384 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:06:14.384 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:06:14.384 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:06:14.384 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:06:14.384 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:06:14.384 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:06:14.384 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:06:14.384 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:06:14.384 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:06:14.384 Installing symlink pointing to librte_efd.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:06:14.384 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:06:14.384 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:06:14.384 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:06:14.384 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:06:14.384 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:06:14.384 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:06:14.384 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:06:14.384 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:06:14.384 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:06:14.384 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:06:14.384 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:06:14.384 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:06:14.384 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:06:14.384 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:06:14.384 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:06:14.384 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:06:14.384 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:06:14.384 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:06:14.384 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:06:14.384 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:06:14.384 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:06:14.384 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:06:14.384 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:06:14.384 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:06:14.384 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:06:14.384 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:06:14.384 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:06:14.384 Installing symlink pointing to librte_regexdev.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:06:14.384 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:06:14.384 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:06:14.384 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:06:14.384 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:06:14.384 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:06:14.384 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:06:14.384 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:06:14.384 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:06:14.384 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:06:14.384 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:06:14.384 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:06:14.384 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:06:14.384 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:06:14.384 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:06:14.384 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:06:14.384 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:06:14.384 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:06:14.384 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:06:14.384 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:06:14.384 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:06:14.384 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:06:14.384 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:06:14.384 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:06:14.384 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:06:14.384 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:06:14.384 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:06:14.384 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:06:14.384 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:06:14.384 Installing 
symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:06:14.384 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:06:14.384 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:06:14.384 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:06:14.384 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:06:14.384 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:06:14.384 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:06:14.384 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:06:14.385 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:06:14.385 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:06:14.385 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:06:14.385 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:06:14.385 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:06:14.385 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:06:14.385 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:06:14.385 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:06:14.385 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:06:14.385 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:06:14.385 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:06:14.385 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:06:14.385 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:06:14.385 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:06:14.385 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:06:14.385 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:06:14.385 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:06:14.385 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:06:14.385 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:06:14.385 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:06:14.385 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:06:14.385 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:06:14.385 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:06:14.385 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:06:14.385 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:06:14.385 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:06:14.385 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:06:14.385 './librte_power_intel_uncore.so.25.0' -> 
'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:06:14.385 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:06:14.385 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:06:14.385 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:06:14.385 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:06:14.385 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:06:14.385 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:06:14.385 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:06:14.385 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:06:14.385 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:06:14.385 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:06:14.385 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:06:14.385 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:06:14.385 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:06:14.385 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:06:14.385 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:06:14.385 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:06:14.385 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:06:14.385 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:06:14.385 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:06:14.644 06:08:33 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:06:14.644 06:08:33 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:14.644 00:06:14.644 real 0m41.097s 00:06:14.644 user 4m47.477s 00:06:14.644 sys 0m42.064s 00:06:14.644 06:08:33 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:14.644 06:08:33 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:06:14.644 ************************************ 00:06:14.644 END TEST build_native_dpdk 00:06:14.644 ************************************ 00:06:14.644 06:08:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:14.644 06:08:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:14.644 
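At this point the headers, shared objects, PMD symlinks under dpdk/pmds-25.0, and the two pkg-config files (libdpdk-libs.pc, libdpdk.pc) are all staged under /home/vagrant/spdk_repo/dpdk/build, and that pkg-config metadata is what downstream consumers, including the SPDK configure step below, resolve against. A minimal smoke test of that layout, as a sketch: the file name demo.c and the build command in the comment are assumptions for illustration, not commands run by this pipeline:

    /* demo.c -- illustrative smoke test of the staged DPDK install.
     * Assumed build line (not executed by this job):
     *   PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
     *     cc demo.c $(pkg-config --cflags --libs libdpdk) -o demo
     */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_version.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)   /* parses EAL flags, probes buses/PMDs */
            return 1;
        printf("linked against %s\n", rte_version());  /* version string of the staged build */
        rte_eal_cleanup();
        return 0;
    }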
06:08:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:14.644 06:08:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:14.644 06:08:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:14.644 06:08:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:14.644 06:08:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:14.644 06:08:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:06:14.644 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:06:14.644 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:06:14.644 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:06:14.644 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:15.575 Using 'verbs' RDMA provider 00:06:26.530 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:36.519 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:36.519 go version go1.21.1 linux/amd64 00:06:36.519 Creating mk/config.mk...done. 00:06:36.519 Creating mk/cc.flags.mk...done. 00:06:36.519 Type 'make' to build. 00:06:36.519 06:08:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:36.519 06:08:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:36.519 06:08:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:36.519 06:08:55 -- common/autotest_common.sh@10 -- $ set +x 00:06:36.519 ************************************ 00:06:36.519 START TEST make 00:06:36.519 ************************************ 00:06:36.519 06:08:55 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:36.776 make[1]: Nothing to be done for 'all'.
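configure was pointed at the staged DPDK via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build, so SPDK's default env layer (lib/env_dpdk, noted above) wraps that DPDK build. For orientation, a sketch of how an application brings that env layer up, following spdk/env.h; the file name env_demo.c is an assumption for illustration, and the snippet is not part of this test run:

    /* env_demo.c -- illustrative only. Shows the env_dpdk initialization that
     * SPDK applications perform on top of the DPDK build configured above. */
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);   /* fill in defaults */
        opts.name = "env_demo";      /* process name handed down to the EAL */

        if (spdk_env_init(&opts) < 0) {   /* boots DPDK underneath */
            fprintf(stderr, "unable to initialize SPDK env\n");
            return 1;
        }

        /* ... allocate with spdk_dma_malloc(), set up drivers, etc. ... */

        spdk_env_fini();
        return 0;
    }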
00:07:15.569 CC lib/ut_mock/mock.o 00:07:15.569 CC lib/ut/ut.o 00:07:15.569 CC lib/log/log_flags.o 00:07:15.569 CC lib/log/log.o 00:07:15.569 CC lib/log/log_deprecated.o 00:07:15.569 LIB libspdk_ut.a 00:07:15.569 LIB libspdk_ut_mock.a 00:07:15.569 LIB libspdk_log.a 00:07:15.569 SO libspdk_ut.so.2.0 00:07:15.569 SO libspdk_ut_mock.so.6.0 00:07:15.569 SO libspdk_log.so.7.1 00:07:15.569 SYMLINK libspdk_ut_mock.so 00:07:15.569 SYMLINK libspdk_ut.so 00:07:15.569 SYMLINK libspdk_log.so 00:07:15.569 CXX lib/trace_parser/trace.o 00:07:15.569 CC lib/ioat/ioat.o 00:07:15.569 CC lib/util/bit_array.o 00:07:15.569 CC lib/util/cpuset.o 00:07:15.569 CC lib/util/base64.o 00:07:15.569 CC lib/dma/dma.o 00:07:15.569 CC lib/util/crc16.o 00:07:15.569 CC lib/util/crc32.o 00:07:15.569 CC lib/util/crc32c.o 00:07:15.569 CC lib/vfio_user/host/vfio_user_pci.o 00:07:15.569 CC lib/util/crc32_ieee.o 00:07:15.569 CC lib/vfio_user/host/vfio_user.o 00:07:15.569 CC lib/util/crc64.o 00:07:15.569 CC lib/util/dif.o 00:07:15.569 LIB libspdk_dma.a 00:07:15.570 SO libspdk_dma.so.5.0 00:07:15.570 CC lib/util/fd.o 00:07:15.570 CC lib/util/fd_group.o 00:07:15.570 CC lib/util/file.o 00:07:15.570 CC lib/util/hexlify.o 00:07:15.570 SYMLINK libspdk_dma.so 00:07:15.570 CC lib/util/iov.o 00:07:15.570 LIB libspdk_vfio_user.a 00:07:15.570 CC lib/util/math.o 00:07:15.570 CC lib/util/net.o 00:07:15.570 LIB libspdk_ioat.a 00:07:15.570 SO libspdk_vfio_user.so.5.0 00:07:15.570 SO libspdk_ioat.so.7.0 00:07:15.570 CC lib/util/pipe.o 00:07:15.570 CC lib/util/strerror_tls.o 00:07:15.570 SYMLINK libspdk_ioat.so 00:07:15.570 SYMLINK libspdk_vfio_user.so 00:07:15.570 CC lib/util/string.o 00:07:15.570 CC lib/util/uuid.o 00:07:15.570 CC lib/util/xor.o 00:07:15.570 CC lib/util/zipf.o 00:07:15.570 CC lib/util/md5.o 00:07:15.570 LIB libspdk_util.a 00:07:15.570 SO libspdk_util.so.10.1 00:07:15.570 SYMLINK libspdk_util.so 00:07:15.570 LIB libspdk_trace_parser.a 00:07:15.570 SO libspdk_trace_parser.so.6.0 00:07:15.570 CC lib/idxd/idxd.o 00:07:15.570 CC lib/idxd/idxd_user.o 00:07:15.570 CC lib/idxd/idxd_kernel.o 00:07:15.570 CC lib/rdma_utils/rdma_utils.o 00:07:15.570 CC lib/conf/conf.o 00:07:15.570 CC lib/json/json_parse.o 00:07:15.570 CC lib/json/json_util.o 00:07:15.570 CC lib/vmd/vmd.o 00:07:15.570 CC lib/env_dpdk/env.o 00:07:15.570 SYMLINK libspdk_trace_parser.so 00:07:15.570 CC lib/env_dpdk/memory.o 00:07:15.570 CC lib/json/json_write.o 00:07:15.570 CC lib/env_dpdk/pci.o 00:07:15.570 CC lib/env_dpdk/init.o 00:07:15.570 CC lib/env_dpdk/threads.o 00:07:15.570 LIB libspdk_conf.a 00:07:15.570 SO libspdk_conf.so.6.0 00:07:15.570 LIB libspdk_rdma_utils.a 00:07:15.570 SO libspdk_rdma_utils.so.1.0 00:07:15.570 SYMLINK libspdk_conf.so 00:07:15.570 CC lib/env_dpdk/pci_ioat.o 00:07:15.570 SYMLINK libspdk_rdma_utils.so 00:07:15.570 CC lib/env_dpdk/pci_virtio.o 00:07:15.570 CC lib/env_dpdk/pci_vmd.o 00:07:15.570 LIB libspdk_json.a 00:07:15.570 CC lib/env_dpdk/pci_idxd.o 00:07:15.570 SO libspdk_json.so.6.0 00:07:15.570 CC lib/vmd/led.o 00:07:15.570 CC lib/env_dpdk/pci_event.o 00:07:15.570 SYMLINK libspdk_json.so 00:07:15.570 LIB libspdk_idxd.a 00:07:15.570 CC lib/env_dpdk/sigbus_handler.o 00:07:15.570 CC lib/env_dpdk/pci_dpdk.o 00:07:15.570 SO libspdk_idxd.so.12.1 00:07:15.570 CC lib/rdma_provider/common.o 00:07:15.570 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:15.570 LIB libspdk_vmd.a 00:07:15.570 SYMLINK libspdk_idxd.so 00:07:15.570 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:15.570 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:15.570 CC lib/jsonrpc/jsonrpc_server.o 
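The objects above come from SPDK's utility layer: lib/log (log.o, log_flags.o, log_deprecated.o), lib/util, and the JSON/JSON-RPC components, with the SO/SYMLINK lines showing each libspdk_*.so being linked as its objects complete. As a sketch of the logging API those lib/log objects implement, per spdk/log.h (illustrative only, not code from this build):

    /* log_demo.c -- illustrative use of the lib/log component built above. */
    #include "spdk/log.h"

    int main(void)
    {
        spdk_log_set_level(SPDK_LOG_DEBUG);        /* threshold for what gets logged  */
        spdk_log_set_print_level(SPDK_LOG_INFO);   /* threshold for stderr output     */

        SPDK_NOTICELOG("util/log/json components are linked\n");
        SPDK_ERRLOG("example error-path message\n");
        return 0;
    }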
00:07:15.570 SO libspdk_vmd.so.6.0 00:07:15.570 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:15.570 CC lib/jsonrpc/jsonrpc_client.o 00:07:15.570 SYMLINK libspdk_vmd.so 00:07:15.570 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:15.570 LIB libspdk_rdma_provider.a 00:07:15.570 SO libspdk_rdma_provider.so.7.0 00:07:15.570 SYMLINK libspdk_rdma_provider.so 00:07:15.570 LIB libspdk_jsonrpc.a 00:07:15.570 SO libspdk_jsonrpc.so.6.0 00:07:15.570 SYMLINK libspdk_jsonrpc.so 00:07:15.570 LIB libspdk_env_dpdk.a 00:07:15.570 CC lib/rpc/rpc.o 00:07:15.570 SO libspdk_env_dpdk.so.15.1 00:07:15.570 SYMLINK libspdk_env_dpdk.so 00:07:15.570 LIB libspdk_rpc.a 00:07:15.570 SO libspdk_rpc.so.6.0 00:07:15.570 SYMLINK libspdk_rpc.so 00:07:15.570 CC lib/notify/notify.o 00:07:15.570 CC lib/notify/notify_rpc.o 00:07:15.570 CC lib/trace/trace_rpc.o 00:07:15.570 CC lib/trace/trace.o 00:07:15.570 CC lib/keyring/keyring.o 00:07:15.570 CC lib/trace/trace_flags.o 00:07:15.570 CC lib/keyring/keyring_rpc.o 00:07:15.570 LIB libspdk_notify.a 00:07:15.570 SO libspdk_notify.so.6.0 00:07:15.570 LIB libspdk_keyring.a 00:07:15.570 SYMLINK libspdk_notify.so 00:07:15.570 LIB libspdk_trace.a 00:07:15.570 SO libspdk_keyring.so.2.0 00:07:15.570 SO libspdk_trace.so.11.0 00:07:15.570 SYMLINK libspdk_keyring.so 00:07:15.570 SYMLINK libspdk_trace.so 00:07:15.570 CC lib/sock/sock.o 00:07:15.570 CC lib/sock/sock_rpc.o 00:07:15.570 CC lib/thread/thread.o 00:07:15.570 CC lib/thread/iobuf.o 00:07:15.570 LIB libspdk_sock.a 00:07:15.570 SO libspdk_sock.so.10.0 00:07:15.570 SYMLINK libspdk_sock.so 00:07:15.570 CC lib/nvme/nvme_fabric.o 00:07:15.570 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:15.570 CC lib/nvme/nvme_ctrlr.o 00:07:15.570 CC lib/nvme/nvme_ns.o 00:07:15.570 CC lib/nvme/nvme_qpair.o 00:07:15.570 CC lib/nvme/nvme_pcie.o 00:07:15.570 CC lib/nvme/nvme_ns_cmd.o 00:07:15.570 CC lib/nvme/nvme.o 00:07:15.570 CC lib/nvme/nvme_pcie_common.o 00:07:15.829 CC lib/nvme/nvme_quirks.o 00:07:15.829 CC lib/nvme/nvme_transport.o 00:07:15.829 CC lib/nvme/nvme_discovery.o 00:07:15.829 LIB libspdk_thread.a 00:07:15.829 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:15.829 SO libspdk_thread.so.11.0 00:07:15.829 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:16.089 CC lib/nvme/nvme_tcp.o 00:07:16.089 SYMLINK libspdk_thread.so 00:07:16.089 CC lib/nvme/nvme_opal.o 00:07:16.089 CC lib/nvme/nvme_io_msg.o 00:07:16.089 CC lib/nvme/nvme_poll_group.o 00:07:16.347 CC lib/nvme/nvme_zns.o 00:07:16.347 CC lib/nvme/nvme_stubs.o 00:07:16.347 CC lib/nvme/nvme_auth.o 00:07:16.347 CC lib/nvme/nvme_cuse.o 00:07:16.605 CC lib/nvme/nvme_rdma.o 00:07:16.605 CC lib/accel/accel.o 00:07:16.605 CC lib/blob/blobstore.o 00:07:16.605 CC lib/blob/request.o 00:07:16.862 CC lib/blob/zeroes.o 00:07:16.862 CC lib/init/json_config.o 00:07:17.121 CC lib/accel/accel_rpc.o 00:07:17.121 CC lib/virtio/virtio.o 00:07:17.121 CC lib/accel/accel_sw.o 00:07:17.121 CC lib/init/subsystem.o 00:07:17.121 CC lib/init/subsystem_rpc.o 00:07:17.121 CC lib/init/rpc.o 00:07:17.121 CC lib/blob/blob_bs_dev.o 00:07:17.121 CC lib/virtio/virtio_vhost_user.o 00:07:17.378 CC lib/virtio/virtio_vfio_user.o 00:07:17.378 CC lib/virtio/virtio_pci.o 00:07:17.378 LIB libspdk_init.a 00:07:17.378 CC lib/fsdev/fsdev.o 00:07:17.378 CC lib/fsdev/fsdev_io.o 00:07:17.378 SO libspdk_init.so.6.0 00:07:17.378 LIB libspdk_accel.a 00:07:17.378 SYMLINK libspdk_init.so 00:07:17.378 CC lib/fsdev/fsdev_rpc.o 00:07:17.378 SO libspdk_accel.so.16.0 00:07:17.636 SYMLINK libspdk_accel.so 00:07:17.636 LIB libspdk_virtio.a 00:07:17.636 SO libspdk_virtio.so.7.0 00:07:17.636 CC 
lib/event/app.o 00:07:17.636 CC lib/event/reactor.o 00:07:17.636 CC lib/event/log_rpc.o 00:07:17.636 CC lib/event/app_rpc.o 00:07:17.636 LIB libspdk_nvme.a 00:07:17.636 SYMLINK libspdk_virtio.so 00:07:17.636 CC lib/event/scheduler_static.o 00:07:17.636 CC lib/bdev/bdev.o 00:07:17.636 CC lib/bdev/bdev_rpc.o 00:07:17.895 CC lib/bdev/bdev_zone.o 00:07:17.895 SO libspdk_nvme.so.15.0 00:07:17.895 CC lib/bdev/part.o 00:07:17.895 CC lib/bdev/scsi_nvme.o 00:07:17.895 LIB libspdk_fsdev.a 00:07:17.895 SO libspdk_fsdev.so.2.0 00:07:17.895 SYMLINK libspdk_fsdev.so 00:07:17.895 LIB libspdk_event.a 00:07:17.895 SYMLINK libspdk_nvme.so 00:07:17.895 SO libspdk_event.so.14.0 00:07:18.156 SYMLINK libspdk_event.so 00:07:18.156 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:18.723 LIB libspdk_fuse_dispatcher.a 00:07:18.723 SO libspdk_fuse_dispatcher.so.1.0 00:07:18.723 SYMLINK libspdk_fuse_dispatcher.so 00:07:18.980 LIB libspdk_blob.a 00:07:18.980 SO libspdk_blob.so.12.0 00:07:19.238 SYMLINK libspdk_blob.so 00:07:19.496 CC lib/blobfs/blobfs.o 00:07:19.496 CC lib/blobfs/tree.o 00:07:19.496 CC lib/lvol/lvol.o 00:07:20.061 LIB libspdk_bdev.a 00:07:20.061 LIB libspdk_blobfs.a 00:07:20.061 SO libspdk_blobfs.so.11.0 00:07:20.061 SO libspdk_bdev.so.17.0 00:07:20.061 LIB libspdk_lvol.a 00:07:20.061 SYMLINK libspdk_blobfs.so 00:07:20.061 SO libspdk_lvol.so.11.0 00:07:20.061 SYMLINK libspdk_bdev.so 00:07:20.061 SYMLINK libspdk_lvol.so 00:07:20.061 CC lib/ftl/ftl_core.o 00:07:20.061 CC lib/ftl/ftl_init.o 00:07:20.061 CC lib/ftl/ftl_layout.o 00:07:20.061 CC lib/ftl/ftl_debug.o 00:07:20.061 CC lib/nbd/nbd.o 00:07:20.061 CC lib/ftl/ftl_io.o 00:07:20.061 CC lib/ftl/ftl_sb.o 00:07:20.061 CC lib/scsi/dev.o 00:07:20.319 CC lib/nvmf/ctrlr.o 00:07:20.319 CC lib/ublk/ublk.o 00:07:20.319 CC lib/ublk/ublk_rpc.o 00:07:20.319 CC lib/ftl/ftl_l2p.o 00:07:20.319 CC lib/nbd/nbd_rpc.o 00:07:20.319 CC lib/scsi/lun.o 00:07:20.319 CC lib/ftl/ftl_l2p_flat.o 00:07:20.658 CC lib/ftl/ftl_nv_cache.o 00:07:20.658 CC lib/ftl/ftl_band.o 00:07:20.658 CC lib/ftl/ftl_band_ops.o 00:07:20.658 CC lib/ftl/ftl_writer.o 00:07:20.658 LIB libspdk_nbd.a 00:07:20.658 CC lib/scsi/port.o 00:07:20.658 SO libspdk_nbd.so.7.0 00:07:20.658 CC lib/nvmf/ctrlr_discovery.o 00:07:20.658 LIB libspdk_ublk.a 00:07:20.658 SYMLINK libspdk_nbd.so 00:07:20.658 CC lib/ftl/ftl_rq.o 00:07:20.658 SO libspdk_ublk.so.3.0 00:07:20.658 CC lib/scsi/scsi.o 00:07:20.658 CC lib/nvmf/ctrlr_bdev.o 00:07:20.658 SYMLINK libspdk_ublk.so 00:07:20.658 CC lib/ftl/ftl_reloc.o 00:07:20.658 CC lib/nvmf/subsystem.o 00:07:20.916 CC lib/scsi/scsi_bdev.o 00:07:20.916 CC lib/scsi/scsi_pr.o 00:07:20.916 CC lib/scsi/scsi_rpc.o 00:07:20.916 CC lib/scsi/task.o 00:07:20.916 CC lib/ftl/ftl_l2p_cache.o 00:07:20.916 CC lib/ftl/ftl_p2l.o 00:07:20.916 CC lib/nvmf/nvmf.o 00:07:20.916 CC lib/nvmf/nvmf_rpc.o 00:07:21.174 CC lib/nvmf/transport.o 00:07:21.174 LIB libspdk_scsi.a 00:07:21.174 CC lib/nvmf/tcp.o 00:07:21.174 CC lib/ftl/ftl_p2l_log.o 00:07:21.174 SO libspdk_scsi.so.9.0 00:07:21.174 CC lib/nvmf/stubs.o 00:07:21.174 SYMLINK libspdk_scsi.so 00:07:21.174 CC lib/nvmf/mdns_server.o 00:07:21.174 CC lib/nvmf/rdma.o 00:07:21.432 CC lib/ftl/mngt/ftl_mngt.o 00:07:21.432 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:21.691 CC lib/nvmf/auth.o 00:07:21.691 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:21.691 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:21.691 CC lib/iscsi/conn.o 00:07:21.691 CC lib/iscsi/init_grp.o 00:07:21.691 CC lib/vhost/vhost.o 00:07:21.691 CC lib/iscsi/iscsi.o 00:07:21.691 CC lib/iscsi/param.o 00:07:21.691 CC 
lib/iscsi/portal_grp.o 00:07:21.950 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:21.950 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:21.950 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:21.950 CC lib/iscsi/tgt_node.o 00:07:22.208 CC lib/vhost/vhost_rpc.o 00:07:22.208 CC lib/vhost/vhost_scsi.o 00:07:22.208 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:22.208 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:22.208 CC lib/iscsi/iscsi_subsystem.o 00:07:22.208 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:22.465 CC lib/iscsi/iscsi_rpc.o 00:07:22.465 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:22.465 CC lib/iscsi/task.o 00:07:22.465 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:22.465 CC lib/vhost/vhost_blk.o 00:07:22.724 CC lib/vhost/rte_vhost_user.o 00:07:22.724 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:22.724 CC lib/ftl/utils/ftl_conf.o 00:07:22.724 CC lib/ftl/utils/ftl_md.o 00:07:22.724 CC lib/ftl/utils/ftl_mempool.o 00:07:22.724 LIB libspdk_iscsi.a 00:07:22.724 CC lib/ftl/utils/ftl_bitmap.o 00:07:22.724 SO libspdk_iscsi.so.8.0 00:07:22.724 CC lib/ftl/utils/ftl_property.o 00:07:22.724 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:22.724 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:22.982 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:22.982 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:22.982 SYMLINK libspdk_iscsi.so 00:07:22.982 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:22.982 LIB libspdk_nvmf.a 00:07:22.982 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:22.982 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:22.982 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:22.982 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:22.982 SO libspdk_nvmf.so.20.0 00:07:22.982 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:22.982 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:22.982 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:23.241 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:23.241 SYMLINK libspdk_nvmf.so 00:07:23.241 CC lib/ftl/base/ftl_base_dev.o 00:07:23.241 CC lib/ftl/base/ftl_base_bdev.o 00:07:23.241 CC lib/ftl/ftl_trace.o 00:07:23.241 LIB libspdk_vhost.a 00:07:23.500 SO libspdk_vhost.so.8.0 00:07:23.500 LIB libspdk_ftl.a 00:07:23.500 SYMLINK libspdk_vhost.so 00:07:23.500 SO libspdk_ftl.so.9.0 00:07:23.758 SYMLINK libspdk_ftl.so 00:07:24.017 CC module/env_dpdk/env_dpdk_rpc.o 00:07:24.017 CC module/accel/dsa/accel_dsa.o 00:07:24.017 CC module/keyring/linux/keyring.o 00:07:24.017 CC module/keyring/file/keyring.o 00:07:24.017 CC module/accel/error/accel_error.o 00:07:24.017 CC module/blob/bdev/blob_bdev.o 00:07:24.017 CC module/fsdev/aio/fsdev_aio.o 00:07:24.017 CC module/sock/posix/posix.o 00:07:24.017 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:24.017 CC module/accel/ioat/accel_ioat.o 00:07:24.275 LIB libspdk_env_dpdk_rpc.a 00:07:24.275 SO libspdk_env_dpdk_rpc.so.6.0 00:07:24.275 CC module/keyring/file/keyring_rpc.o 00:07:24.275 SYMLINK libspdk_env_dpdk_rpc.so 00:07:24.275 CC module/keyring/linux/keyring_rpc.o 00:07:24.275 CC module/accel/ioat/accel_ioat_rpc.o 00:07:24.275 CC module/accel/error/accel_error_rpc.o 00:07:24.275 LIB libspdk_scheduler_dynamic.a 00:07:24.275 SO libspdk_scheduler_dynamic.so.4.0 00:07:24.275 SYMLINK libspdk_scheduler_dynamic.so 00:07:24.275 LIB libspdk_keyring_file.a 00:07:24.275 LIB libspdk_accel_ioat.a 00:07:24.275 LIB libspdk_keyring_linux.a 00:07:24.275 LIB libspdk_blob_bdev.a 00:07:24.275 SO libspdk_keyring_file.so.2.0 00:07:24.275 SO libspdk_blob_bdev.so.12.0 00:07:24.275 SO libspdk_keyring_linux.so.1.0 00:07:24.275 CC module/accel/dsa/accel_dsa_rpc.o 00:07:24.275 SO libspdk_accel_ioat.so.6.0 00:07:24.275 LIB libspdk_accel_error.a 00:07:24.533 SYMLINK 
libspdk_keyring_file.so 00:07:24.533 SO libspdk_accel_error.so.2.0 00:07:24.533 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:24.533 SYMLINK libspdk_blob_bdev.so 00:07:24.533 SYMLINK libspdk_accel_ioat.so 00:07:24.533 CC module/fsdev/aio/linux_aio_mgr.o 00:07:24.533 SYMLINK libspdk_keyring_linux.so 00:07:24.533 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:24.533 SYMLINK libspdk_accel_error.so 00:07:24.533 CC module/accel/iaa/accel_iaa.o 00:07:24.533 LIB libspdk_accel_dsa.a 00:07:24.533 SO libspdk_accel_dsa.so.5.0 00:07:24.533 LIB libspdk_fsdev_aio.a 00:07:24.533 CC module/scheduler/gscheduler/gscheduler.o 00:07:24.533 SYMLINK libspdk_accel_dsa.so 00:07:24.533 LIB libspdk_scheduler_dpdk_governor.a 00:07:24.533 CC module/accel/iaa/accel_iaa_rpc.o 00:07:24.533 SO libspdk_fsdev_aio.so.1.0 00:07:24.533 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:24.792 CC module/bdev/error/vbdev_error.o 00:07:24.792 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:24.792 CC module/bdev/delay/vbdev_delay.o 00:07:24.792 SYMLINK libspdk_fsdev_aio.so 00:07:24.792 CC module/bdev/error/vbdev_error_rpc.o 00:07:24.792 CC module/bdev/gpt/gpt.o 00:07:24.792 CC module/blobfs/bdev/blobfs_bdev.o 00:07:24.792 LIB libspdk_sock_posix.a 00:07:24.792 LIB libspdk_scheduler_gscheduler.a 00:07:24.792 SO libspdk_sock_posix.so.6.0 00:07:24.792 LIB libspdk_accel_iaa.a 00:07:24.792 SO libspdk_scheduler_gscheduler.so.4.0 00:07:24.792 SO libspdk_accel_iaa.so.3.0 00:07:24.792 CC module/bdev/lvol/vbdev_lvol.o 00:07:24.792 SYMLINK libspdk_sock_posix.so 00:07:24.792 CC module/bdev/malloc/bdev_malloc.o 00:07:24.792 SYMLINK libspdk_scheduler_gscheduler.so 00:07:24.792 SYMLINK libspdk_accel_iaa.so 00:07:24.792 CC module/bdev/gpt/vbdev_gpt.o 00:07:24.792 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:24.792 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:24.792 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:24.792 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:24.792 LIB libspdk_bdev_error.a 00:07:24.792 SO libspdk_bdev_error.so.6.0 00:07:24.792 SYMLINK libspdk_bdev_error.so 00:07:25.051 CC module/bdev/null/bdev_null.o 00:07:25.051 LIB libspdk_bdev_delay.a 00:07:25.051 LIB libspdk_blobfs_bdev.a 00:07:25.051 SO libspdk_bdev_delay.so.6.0 00:07:25.051 LIB libspdk_bdev_gpt.a 00:07:25.051 SO libspdk_blobfs_bdev.so.6.0 00:07:25.051 SO libspdk_bdev_gpt.so.6.0 00:07:25.051 CC module/bdev/nvme/bdev_nvme.o 00:07:25.051 SYMLINK libspdk_bdev_delay.so 00:07:25.051 CC module/bdev/passthru/vbdev_passthru.o 00:07:25.051 SYMLINK libspdk_blobfs_bdev.so 00:07:25.051 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:25.051 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:25.051 SYMLINK libspdk_bdev_gpt.so 00:07:25.051 CC module/bdev/nvme/nvme_rpc.o 00:07:25.051 CC module/bdev/raid/bdev_raid.o 00:07:25.051 CC module/bdev/null/bdev_null_rpc.o 00:07:25.051 LIB libspdk_bdev_malloc.a 00:07:25.051 LIB libspdk_bdev_lvol.a 00:07:25.051 SO libspdk_bdev_malloc.so.6.0 00:07:25.051 SO libspdk_bdev_lvol.so.6.0 00:07:25.051 CC module/bdev/split/vbdev_split.o 00:07:25.316 SYMLINK libspdk_bdev_malloc.so 00:07:25.316 CC module/bdev/raid/bdev_raid_rpc.o 00:07:25.316 CC module/bdev/raid/bdev_raid_sb.o 00:07:25.316 SYMLINK libspdk_bdev_lvol.so 00:07:25.316 CC module/bdev/raid/raid0.o 00:07:25.316 CC module/bdev/raid/raid1.o 00:07:25.316 LIB libspdk_bdev_null.a 00:07:25.316 SO libspdk_bdev_null.so.6.0 00:07:25.316 SYMLINK libspdk_bdev_null.so 00:07:25.316 LIB libspdk_bdev_passthru.a 00:07:25.316 SO libspdk_bdev_passthru.so.6.0 00:07:25.316 CC module/bdev/raid/concat.o 00:07:25.316 CC 
module/bdev/split/vbdev_split_rpc.o 00:07:25.316 CC module/bdev/nvme/bdev_mdns_client.o 00:07:25.316 SYMLINK libspdk_bdev_passthru.so 00:07:25.316 CC module/bdev/nvme/vbdev_opal.o 00:07:25.316 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:25.574 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:25.574 LIB libspdk_bdev_split.a 00:07:25.574 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:25.574 SO libspdk_bdev_split.so.6.0 00:07:25.574 CC module/bdev/aio/bdev_aio.o 00:07:25.574 CC module/bdev/ftl/bdev_ftl.o 00:07:25.574 SYMLINK libspdk_bdev_split.so 00:07:25.574 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:25.574 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:25.574 CC module/bdev/aio/bdev_aio_rpc.o 00:07:25.574 LIB libspdk_bdev_zone_block.a 00:07:25.832 SO libspdk_bdev_zone_block.so.6.0 00:07:25.832 LIB libspdk_bdev_raid.a 00:07:25.832 SYMLINK libspdk_bdev_zone_block.so 00:07:25.832 SO libspdk_bdev_raid.so.6.0 00:07:25.832 LIB libspdk_bdev_aio.a 00:07:25.832 CC module/bdev/iscsi/bdev_iscsi.o 00:07:25.832 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:25.832 SO libspdk_bdev_aio.so.6.0 00:07:25.832 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:25.832 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:25.832 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:25.832 LIB libspdk_bdev_ftl.a 00:07:25.832 SYMLINK libspdk_bdev_raid.so 00:07:25.832 SO libspdk_bdev_ftl.so.6.0 00:07:25.832 SYMLINK libspdk_bdev_aio.so 00:07:25.832 SYMLINK libspdk_bdev_ftl.so 00:07:26.090 LIB libspdk_bdev_iscsi.a 00:07:26.090 SO libspdk_bdev_iscsi.so.6.0 00:07:26.090 LIB libspdk_bdev_virtio.a 00:07:26.090 SO libspdk_bdev_virtio.so.6.0 00:07:26.090 SYMLINK libspdk_bdev_iscsi.so 00:07:26.348 SYMLINK libspdk_bdev_virtio.so 00:07:26.605 LIB libspdk_bdev_nvme.a 00:07:26.605 SO libspdk_bdev_nvme.so.7.1 00:07:26.862 SYMLINK libspdk_bdev_nvme.so 00:07:27.120 CC module/event/subsystems/vmd/vmd.o 00:07:27.120 CC module/event/subsystems/keyring/keyring.o 00:07:27.120 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:27.120 CC module/event/subsystems/iobuf/iobuf.o 00:07:27.120 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:27.120 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:27.120 CC module/event/subsystems/fsdev/fsdev.o 00:07:27.120 CC module/event/subsystems/scheduler/scheduler.o 00:07:27.120 CC module/event/subsystems/sock/sock.o 00:07:27.378 LIB libspdk_event_keyring.a 00:07:27.378 LIB libspdk_event_vhost_blk.a 00:07:27.378 LIB libspdk_event_vmd.a 00:07:27.378 LIB libspdk_event_iobuf.a 00:07:27.378 LIB libspdk_event_sock.a 00:07:27.378 SO libspdk_event_keyring.so.1.0 00:07:27.378 LIB libspdk_event_fsdev.a 00:07:27.378 SO libspdk_event_vhost_blk.so.3.0 00:07:27.378 LIB libspdk_event_scheduler.a 00:07:27.378 SO libspdk_event_vmd.so.6.0 00:07:27.378 SO libspdk_event_sock.so.5.0 00:07:27.378 SO libspdk_event_iobuf.so.3.0 00:07:27.378 SO libspdk_event_fsdev.so.1.0 00:07:27.378 SO libspdk_event_scheduler.so.4.0 00:07:27.378 SYMLINK libspdk_event_keyring.so 00:07:27.378 SYMLINK libspdk_event_vhost_blk.so 00:07:27.378 SYMLINK libspdk_event_vmd.so 00:07:27.378 SYMLINK libspdk_event_sock.so 00:07:27.378 SYMLINK libspdk_event_iobuf.so 00:07:27.378 SYMLINK libspdk_event_fsdev.so 00:07:27.378 SYMLINK libspdk_event_scheduler.so 00:07:27.636 CC module/event/subsystems/accel/accel.o 00:07:27.636 LIB libspdk_event_accel.a 00:07:27.636 SO libspdk_event_accel.so.6.0 00:07:27.894 SYMLINK libspdk_event_accel.so 00:07:27.894 CC module/event/subsystems/bdev/bdev.o 00:07:28.152 LIB libspdk_event_bdev.a 00:07:28.152 SO libspdk_event_bdev.so.6.0 
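
The SO/SYMLINK pairs in the link stage above correspond to the usual versioned-shared-object layout: each libspdk_*.so.N.M carries the soname, and an unversioned libspdk_*.so symlink is placed beside it for the linker. A minimal sketch of that pattern with a generic toolchain (the library name and version here are placeholders for illustration, not SPDK's actual Makefile rules):

  # Build a versioned shared object, then link the unversioned name to it.
  # mylib.c / libmylib.so.6.0 are illustrative placeholders.
  gcc -fPIC -c mylib.c -o mylib.o
  gcc -shared -Wl,-soname,libmylib.so.6 -o libmylib.so.6.0 mylib.o
  ln -sf libmylib.so.6.0 libmylib.so    # the SYMLINK step seen in the log
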
00:07:28.152 SYMLINK libspdk_event_bdev.so 00:07:28.410 CC module/event/subsystems/ublk/ublk.o 00:07:28.410 CC module/event/subsystems/nbd/nbd.o 00:07:28.410 CC module/event/subsystems/scsi/scsi.o 00:07:28.410 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:28.410 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:28.410 LIB libspdk_event_nbd.a 00:07:28.410 SO libspdk_event_nbd.so.6.0 00:07:28.410 LIB libspdk_event_ublk.a 00:07:28.410 LIB libspdk_event_scsi.a 00:07:28.410 SO libspdk_event_ublk.so.3.0 00:07:28.410 SYMLINK libspdk_event_nbd.so 00:07:28.410 SO libspdk_event_scsi.so.6.0 00:07:28.410 SYMLINK libspdk_event_ublk.so 00:07:28.668 LIB libspdk_event_nvmf.a 00:07:28.668 SYMLINK libspdk_event_scsi.so 00:07:28.668 SO libspdk_event_nvmf.so.6.0 00:07:28.668 SYMLINK libspdk_event_nvmf.so 00:07:28.668 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:28.668 CC module/event/subsystems/iscsi/iscsi.o 00:07:28.927 LIB libspdk_event_vhost_scsi.a 00:07:28.927 SO libspdk_event_vhost_scsi.so.3.0 00:07:28.927 LIB libspdk_event_iscsi.a 00:07:28.927 SO libspdk_event_iscsi.so.6.0 00:07:28.927 SYMLINK libspdk_event_vhost_scsi.so 00:07:28.927 SYMLINK libspdk_event_iscsi.so 00:07:29.185 SO libspdk.so.6.0 00:07:29.185 SYMLINK libspdk.so 00:07:29.185 CC app/trace_record/trace_record.o 00:07:29.185 CC test/rpc_client/rpc_client_test.o 00:07:29.185 CXX app/trace/trace.o 00:07:29.185 TEST_HEADER include/spdk/accel.h 00:07:29.185 TEST_HEADER include/spdk/accel_module.h 00:07:29.185 TEST_HEADER include/spdk/assert.h 00:07:29.185 TEST_HEADER include/spdk/barrier.h 00:07:29.185 TEST_HEADER include/spdk/base64.h 00:07:29.185 TEST_HEADER include/spdk/bdev.h 00:07:29.185 TEST_HEADER include/spdk/bdev_module.h 00:07:29.185 TEST_HEADER include/spdk/bdev_zone.h 00:07:29.185 TEST_HEADER include/spdk/bit_array.h 00:07:29.185 TEST_HEADER include/spdk/bit_pool.h 00:07:29.185 TEST_HEADER include/spdk/blob_bdev.h 00:07:29.185 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:29.185 TEST_HEADER include/spdk/blobfs.h 00:07:29.185 TEST_HEADER include/spdk/blob.h 00:07:29.185 TEST_HEADER include/spdk/conf.h 00:07:29.185 TEST_HEADER include/spdk/config.h 00:07:29.444 TEST_HEADER include/spdk/cpuset.h 00:07:29.444 TEST_HEADER include/spdk/crc16.h 00:07:29.444 TEST_HEADER include/spdk/crc32.h 00:07:29.444 TEST_HEADER include/spdk/crc64.h 00:07:29.444 TEST_HEADER include/spdk/dif.h 00:07:29.444 TEST_HEADER include/spdk/dma.h 00:07:29.444 TEST_HEADER include/spdk/endian.h 00:07:29.444 TEST_HEADER include/spdk/env_dpdk.h 00:07:29.444 TEST_HEADER include/spdk/env.h 00:07:29.444 TEST_HEADER include/spdk/event.h 00:07:29.444 TEST_HEADER include/spdk/fd_group.h 00:07:29.444 TEST_HEADER include/spdk/fd.h 00:07:29.444 CC app/nvmf_tgt/nvmf_main.o 00:07:29.444 TEST_HEADER include/spdk/file.h 00:07:29.444 TEST_HEADER include/spdk/fsdev.h 00:07:29.444 TEST_HEADER include/spdk/fsdev_module.h 00:07:29.444 TEST_HEADER include/spdk/ftl.h 00:07:29.444 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:29.444 TEST_HEADER include/spdk/gpt_spec.h 00:07:29.444 TEST_HEADER include/spdk/hexlify.h 00:07:29.444 TEST_HEADER include/spdk/histogram_data.h 00:07:29.444 TEST_HEADER include/spdk/idxd.h 00:07:29.444 CC test/thread/poller_perf/poller_perf.o 00:07:29.444 TEST_HEADER include/spdk/idxd_spec.h 00:07:29.444 CC examples/util/zipf/zipf.o 00:07:29.444 TEST_HEADER include/spdk/init.h 00:07:29.444 TEST_HEADER include/spdk/ioat.h 00:07:29.444 TEST_HEADER include/spdk/ioat_spec.h 00:07:29.444 TEST_HEADER include/spdk/iscsi_spec.h 00:07:29.444 TEST_HEADER 
include/spdk/json.h 00:07:29.444 TEST_HEADER include/spdk/jsonrpc.h 00:07:29.444 TEST_HEADER include/spdk/keyring.h 00:07:29.444 TEST_HEADER include/spdk/keyring_module.h 00:07:29.444 TEST_HEADER include/spdk/likely.h 00:07:29.444 TEST_HEADER include/spdk/log.h 00:07:29.444 TEST_HEADER include/spdk/lvol.h 00:07:29.444 TEST_HEADER include/spdk/md5.h 00:07:29.444 CC test/env/mem_callbacks/mem_callbacks.o 00:07:29.444 CC test/dma/test_dma/test_dma.o 00:07:29.444 TEST_HEADER include/spdk/memory.h 00:07:29.444 CC test/app/bdev_svc/bdev_svc.o 00:07:29.444 LINK rpc_client_test 00:07:29.444 TEST_HEADER include/spdk/mmio.h 00:07:29.444 TEST_HEADER include/spdk/nbd.h 00:07:29.444 TEST_HEADER include/spdk/net.h 00:07:29.444 TEST_HEADER include/spdk/notify.h 00:07:29.444 TEST_HEADER include/spdk/nvme.h 00:07:29.444 TEST_HEADER include/spdk/nvme_intel.h 00:07:29.444 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:29.444 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:29.444 TEST_HEADER include/spdk/nvme_spec.h 00:07:29.444 TEST_HEADER include/spdk/nvme_zns.h 00:07:29.444 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:29.444 LINK spdk_trace_record 00:07:29.444 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:29.444 TEST_HEADER include/spdk/nvmf.h 00:07:29.444 TEST_HEADER include/spdk/nvmf_spec.h 00:07:29.444 TEST_HEADER include/spdk/nvmf_transport.h 00:07:29.444 TEST_HEADER include/spdk/opal.h 00:07:29.444 TEST_HEADER include/spdk/opal_spec.h 00:07:29.444 TEST_HEADER include/spdk/pci_ids.h 00:07:29.444 TEST_HEADER include/spdk/pipe.h 00:07:29.444 TEST_HEADER include/spdk/queue.h 00:07:29.444 TEST_HEADER include/spdk/reduce.h 00:07:29.444 TEST_HEADER include/spdk/rpc.h 00:07:29.444 TEST_HEADER include/spdk/scheduler.h 00:07:29.444 LINK zipf 00:07:29.444 TEST_HEADER include/spdk/scsi.h 00:07:29.444 TEST_HEADER include/spdk/scsi_spec.h 00:07:29.444 TEST_HEADER include/spdk/sock.h 00:07:29.444 TEST_HEADER include/spdk/stdinc.h 00:07:29.444 TEST_HEADER include/spdk/string.h 00:07:29.444 TEST_HEADER include/spdk/thread.h 00:07:29.444 TEST_HEADER include/spdk/trace.h 00:07:29.444 TEST_HEADER include/spdk/trace_parser.h 00:07:29.444 TEST_HEADER include/spdk/tree.h 00:07:29.444 TEST_HEADER include/spdk/ublk.h 00:07:29.444 TEST_HEADER include/spdk/util.h 00:07:29.444 TEST_HEADER include/spdk/uuid.h 00:07:29.444 TEST_HEADER include/spdk/version.h 00:07:29.444 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:29.444 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:29.444 TEST_HEADER include/spdk/vhost.h 00:07:29.444 TEST_HEADER include/spdk/vmd.h 00:07:29.444 TEST_HEADER include/spdk/xor.h 00:07:29.444 TEST_HEADER include/spdk/zipf.h 00:07:29.444 CXX test/cpp_headers/accel.o 00:07:29.444 LINK poller_perf 00:07:29.444 LINK nvmf_tgt 00:07:29.444 CXX test/cpp_headers/accel_module.o 00:07:29.445 LINK bdev_svc 00:07:29.704 CC test/env/vtophys/vtophys.o 00:07:29.704 LINK spdk_trace 00:07:29.704 CXX test/cpp_headers/assert.o 00:07:29.704 LINK test_dma 00:07:29.704 LINK vtophys 00:07:29.704 CC examples/ioat/perf/perf.o 00:07:29.704 CXX test/cpp_headers/barrier.o 00:07:29.704 CC examples/vmd/lsvmd/lsvmd.o 00:07:29.704 LINK mem_callbacks 00:07:29.704 CC examples/idxd/perf/perf.o 00:07:29.962 CC app/iscsi_tgt/iscsi_tgt.o 00:07:29.962 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:29.962 LINK lsvmd 00:07:29.962 CXX test/cpp_headers/base64.o 00:07:29.962 LINK ioat_perf 00:07:29.962 CXX test/cpp_headers/bdev.o 00:07:29.962 CC test/app/histogram_perf/histogram_perf.o 00:07:29.962 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:29.962 
LINK idxd_perf 00:07:29.962 LINK iscsi_tgt 00:07:29.962 LINK histogram_perf 00:07:29.962 CXX test/cpp_headers/bdev_module.o 00:07:29.962 CC examples/ioat/verify/verify.o 00:07:30.220 CC examples/vmd/led/led.o 00:07:30.220 LINK env_dpdk_post_init 00:07:30.220 CXX test/cpp_headers/bdev_zone.o 00:07:30.220 LINK nvme_fuzz 00:07:30.220 CC test/event/event_perf/event_perf.o 00:07:30.220 LINK verify 00:07:30.220 LINK led 00:07:30.220 CXX test/cpp_headers/bit_array.o 00:07:30.220 CC test/event/reactor/reactor.o 00:07:30.220 CC test/env/memory/memory_ut.o 00:07:30.220 CC app/spdk_lspci/spdk_lspci.o 00:07:30.220 CC app/spdk_tgt/spdk_tgt.o 00:07:30.220 LINK event_perf 00:07:30.479 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:30.479 LINK reactor 00:07:30.479 CXX test/cpp_headers/bit_pool.o 00:07:30.479 CC app/spdk_nvme_perf/perf.o 00:07:30.479 LINK spdk_lspci 00:07:30.479 CXX test/cpp_headers/blob_bdev.o 00:07:30.479 CC app/spdk_nvme_identify/identify.o 00:07:30.479 LINK spdk_tgt 00:07:30.479 CXX test/cpp_headers/blobfs_bdev.o 00:07:30.479 CC test/event/reactor_perf/reactor_perf.o 00:07:30.750 CXX test/cpp_headers/blobfs.o 00:07:30.750 CC test/nvme/aer/aer.o 00:07:30.750 LINK reactor_perf 00:07:30.750 CC test/accel/dif/dif.o 00:07:30.750 CXX test/cpp_headers/blob.o 00:07:31.043 CC test/event/app_repeat/app_repeat.o 00:07:31.043 LINK aer 00:07:31.043 CXX test/cpp_headers/conf.o 00:07:31.043 CC test/blobfs/mkfs/mkfs.o 00:07:31.043 LINK spdk_nvme_perf 00:07:31.043 LINK spdk_nvme_identify 00:07:31.043 LINK app_repeat 00:07:31.043 CXX test/cpp_headers/config.o 00:07:31.043 CXX test/cpp_headers/cpuset.o 00:07:31.043 LINK memory_ut 00:07:31.043 CC test/nvme/reset/reset.o 00:07:31.043 CXX test/cpp_headers/crc16.o 00:07:31.043 LINK dif 00:07:31.301 LINK mkfs 00:07:31.301 CC app/spdk_nvme_discover/discovery_aer.o 00:07:31.301 CC test/nvme/sgl/sgl.o 00:07:31.301 CC test/env/pci/pci_ut.o 00:07:31.301 CC test/event/scheduler/scheduler.o 00:07:31.301 CXX test/cpp_headers/crc32.o 00:07:31.301 CXX test/cpp_headers/crc64.o 00:07:31.301 LINK reset 00:07:31.301 CXX test/cpp_headers/dif.o 00:07:31.301 LINK spdk_nvme_discover 00:07:31.559 LINK sgl 00:07:31.559 LINK scheduler 00:07:31.559 CC test/app/jsoncat/jsoncat.o 00:07:31.559 CXX test/cpp_headers/dma.o 00:07:31.559 LINK pci_ut 00:07:31.559 CC test/app/stub/stub.o 00:07:31.559 CC app/spdk_top/spdk_top.o 00:07:31.559 LINK iscsi_fuzz 00:07:31.559 LINK jsoncat 00:07:31.559 CC test/lvol/esnap/esnap.o 00:07:31.559 CXX test/cpp_headers/endian.o 00:07:31.559 CC test/nvme/e2edp/nvme_dp.o 00:07:31.817 CXX test/cpp_headers/env_dpdk.o 00:07:31.817 CC app/vhost/vhost.o 00:07:31.817 LINK stub 00:07:31.817 CXX test/cpp_headers/env.o 00:07:31.817 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:31.817 CXX test/cpp_headers/event.o 00:07:31.817 LINK nvme_dp 00:07:31.817 LINK vhost 00:07:31.817 CC test/bdev/bdevio/bdevio.o 00:07:32.075 CXX test/cpp_headers/fd_group.o 00:07:32.075 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:32.075 CC app/spdk_dd/spdk_dd.o 00:07:32.075 CC test/nvme/overhead/overhead.o 00:07:32.075 CXX test/cpp_headers/fd.o 00:07:32.075 CXX test/cpp_headers/file.o 00:07:32.075 CC app/fio/nvme/fio_plugin.o 00:07:32.334 LINK bdevio 00:07:32.334 CXX test/cpp_headers/fsdev.o 00:07:32.334 CC test/nvme/err_injection/err_injection.o 00:07:32.334 LINK vhost_fuzz 00:07:32.334 LINK spdk_dd 00:07:32.334 LINK overhead 00:07:32.334 LINK spdk_top 00:07:32.334 CXX test/cpp_headers/fsdev_module.o 00:07:32.334 CXX test/cpp_headers/ftl.o 00:07:32.334 CXX 
test/cpp_headers/fuse_dispatcher.o 00:07:32.592 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:32.592 LINK err_injection 00:07:32.592 CXX test/cpp_headers/gpt_spec.o 00:07:32.592 CXX test/cpp_headers/hexlify.o 00:07:32.592 LINK spdk_nvme 00:07:32.592 LINK interrupt_tgt 00:07:32.592 CXX test/cpp_headers/histogram_data.o 00:07:32.592 CC app/fio/bdev/fio_plugin.o 00:07:32.850 CC test/nvme/startup/startup.o 00:07:32.850 CC examples/thread/thread/thread_ex.o 00:07:32.850 CC test/nvme/reserve/reserve.o 00:07:32.850 CC test/nvme/simple_copy/simple_copy.o 00:07:32.850 CC examples/sock/hello_world/hello_sock.o 00:07:32.850 CXX test/cpp_headers/idxd.o 00:07:32.850 LINK startup 00:07:32.850 CC test/nvme/connect_stress/connect_stress.o 00:07:33.109 LINK reserve 00:07:33.109 LINK simple_copy 00:07:33.109 LINK thread 00:07:33.109 CXX test/cpp_headers/idxd_spec.o 00:07:33.109 LINK hello_sock 00:07:33.109 CC test/nvme/boot_partition/boot_partition.o 00:07:33.109 LINK spdk_bdev 00:07:33.368 CXX test/cpp_headers/init.o 00:07:33.368 LINK connect_stress 00:07:33.368 CC test/nvme/compliance/nvme_compliance.o 00:07:33.368 LINK boot_partition 00:07:33.368 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:33.368 CC test/nvme/fused_ordering/fused_ordering.o 00:07:33.368 CXX test/cpp_headers/ioat.o 00:07:33.368 CC examples/accel/perf/accel_perf.o 00:07:33.627 CC examples/blob/hello_world/hello_blob.o 00:07:33.627 LINK fused_ordering 00:07:33.627 LINK doorbell_aers 00:07:33.627 CXX test/cpp_headers/ioat_spec.o 00:07:33.627 CC examples/nvme/hello_world/hello_world.o 00:07:33.627 LINK nvme_compliance 00:07:33.627 CXX test/cpp_headers/iscsi_spec.o 00:07:33.627 LINK hello_world 00:07:33.627 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:33.627 CC test/nvme/fdp/fdp.o 00:07:33.885 LINK accel_perf 00:07:33.885 CC test/nvme/cuse/cuse.o 00:07:33.885 CXX test/cpp_headers/json.o 00:07:33.885 LINK hello_blob 00:07:33.885 CC examples/blob/cli/blobcli.o 00:07:33.885 CXX test/cpp_headers/jsonrpc.o 00:07:33.885 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:33.885 CC examples/nvme/reconnect/reconnect.o 00:07:33.885 LINK hello_fsdev 00:07:34.143 LINK fdp 00:07:34.143 CXX test/cpp_headers/keyring.o 00:07:34.143 CC examples/nvme/arbitration/arbitration.o 00:07:34.143 CXX test/cpp_headers/keyring_module.o 00:07:34.143 CC examples/nvme/hotplug/hotplug.o 00:07:34.143 LINK blobcli 00:07:34.143 LINK reconnect 00:07:34.401 LINK nvme_manage 00:07:34.401 CC examples/bdev/hello_world/hello_bdev.o 00:07:34.401 CXX test/cpp_headers/likely.o 00:07:34.401 LINK arbitration 00:07:34.401 CXX test/cpp_headers/log.o 00:07:34.401 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:34.401 LINK hotplug 00:07:34.401 CXX test/cpp_headers/lvol.o 00:07:34.401 CXX test/cpp_headers/md5.o 00:07:34.401 LINK hello_bdev 00:07:34.401 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:34.401 CC examples/nvme/abort/abort.o 00:07:34.659 CXX test/cpp_headers/memory.o 00:07:34.659 LINK cmb_copy 00:07:34.659 CXX test/cpp_headers/mmio.o 00:07:34.659 LINK cuse 00:07:34.659 CXX test/cpp_headers/nbd.o 00:07:34.659 CXX test/cpp_headers/net.o 00:07:34.659 LINK pmr_persistence 00:07:34.659 CXX test/cpp_headers/notify.o 00:07:34.659 CC examples/bdev/bdevperf/bdevperf.o 00:07:34.659 CXX test/cpp_headers/nvme.o 00:07:34.659 CXX test/cpp_headers/nvme_intel.o 00:07:34.916 CXX test/cpp_headers/nvme_ocssd.o 00:07:34.916 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:34.916 LINK abort 00:07:34.916 CXX test/cpp_headers/nvme_spec.o 00:07:34.916 CXX test/cpp_headers/nvme_zns.o 00:07:34.916 
CXX test/cpp_headers/nvmf_cmd.o 00:07:34.916 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:34.916 CXX test/cpp_headers/nvmf.o 00:07:34.916 CXX test/cpp_headers/nvmf_spec.o 00:07:34.916 CXX test/cpp_headers/nvmf_transport.o 00:07:34.916 CXX test/cpp_headers/opal.o 00:07:34.916 CXX test/cpp_headers/opal_spec.o 00:07:34.916 CXX test/cpp_headers/pci_ids.o 00:07:34.916 CXX test/cpp_headers/pipe.o 00:07:34.916 CXX test/cpp_headers/queue.o 00:07:34.916 CXX test/cpp_headers/reduce.o 00:07:35.174 CXX test/cpp_headers/rpc.o 00:07:35.174 CXX test/cpp_headers/scheduler.o 00:07:35.174 CXX test/cpp_headers/scsi.o 00:07:35.174 CXX test/cpp_headers/scsi_spec.o 00:07:35.174 CXX test/cpp_headers/sock.o 00:07:35.174 CXX test/cpp_headers/stdinc.o 00:07:35.174 CXX test/cpp_headers/string.o 00:07:35.174 CXX test/cpp_headers/thread.o 00:07:35.174 CXX test/cpp_headers/trace.o 00:07:35.174 CXX test/cpp_headers/trace_parser.o 00:07:35.174 CXX test/cpp_headers/tree.o 00:07:35.174 CXX test/cpp_headers/ublk.o 00:07:35.174 CXX test/cpp_headers/util.o 00:07:35.174 CXX test/cpp_headers/uuid.o 00:07:35.174 CXX test/cpp_headers/version.o 00:07:35.174 CXX test/cpp_headers/vfio_user_pci.o 00:07:35.174 LINK bdevperf 00:07:35.174 CXX test/cpp_headers/vfio_user_spec.o 00:07:35.174 CXX test/cpp_headers/vhost.o 00:07:35.174 CXX test/cpp_headers/vmd.o 00:07:35.174 CXX test/cpp_headers/xor.o 00:07:35.431 CXX test/cpp_headers/zipf.o 00:07:35.688 CC examples/nvmf/nvmf/nvmf.o 00:07:35.946 LINK nvmf 00:07:35.946 LINK esnap 00:07:36.204 00:07:36.204 real 0m59.692s 00:07:36.204 user 5m6.924s 00:07:36.204 sys 0m50.725s 00:07:36.204 06:09:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:36.204 06:09:55 make -- common/autotest_common.sh@10 -- $ set +x 00:07:36.204 ************************************ 00:07:36.204 END TEST make 00:07:36.204 ************************************ 00:07:36.204 06:09:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:36.204 06:09:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:36.204 06:09:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:36.204 06:09:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:36.204 06:09:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:36.204 06:09:55 -- pm/common@44 -- $ pid=5793 00:07:36.204 06:09:55 -- pm/common@50 -- $ kill -TERM 5793 00:07:36.204 06:09:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:36.204 06:09:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:36.204 06:09:55 -- pm/common@44 -- $ pid=5794 00:07:36.204 06:09:55 -- pm/common@50 -- $ kill -TERM 5794 00:07:36.204 06:09:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:36.204 06:09:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:36.204 06:09:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.204 06:09:55 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.204 06:09:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.463 06:09:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.463 06:09:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.463 06:09:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.463 06:09:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.463 06:09:55 -- scripts/common.sh@336 -- # IFS=.-: 
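
The pm/common trace at the end of the make step stops the resource monitors by reading one pid per pidfile and sending SIGTERM. A sketch of that cleanup under the same one-pid-per-file layout (the glob and $output_dir are illustrative assumptions, not the harness's exact paths):

  # Stop each resource monitor recorded as one pid per pidfile; tolerate
  # monitors that already exited. $output_dir is a placeholder path.
  for pidfile in "$output_dir"/power/collect-*.pid; do
      [[ -e $pidfile ]] || continue
      pid=$(<"$pidfile")
      kill -TERM "$pid" 2>/dev/null || true
  done
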
00:07:36.463 06:09:55 -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.463 06:09:55 -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.463 06:09:55 -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.463 06:09:55 -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.463 06:09:55 -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.463 06:09:55 -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.463 06:09:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.463 06:09:55 -- scripts/common.sh@344 -- # case "$op" in 00:07:36.463 06:09:55 -- scripts/common.sh@345 -- # : 1 00:07:36.463 06:09:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.463 06:09:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.463 06:09:55 -- scripts/common.sh@365 -- # decimal 1 00:07:36.463 06:09:55 -- scripts/common.sh@353 -- # local d=1 00:07:36.463 06:09:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.463 06:09:55 -- scripts/common.sh@355 -- # echo 1 00:07:36.463 06:09:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.463 06:09:55 -- scripts/common.sh@366 -- # decimal 2 00:07:36.463 06:09:55 -- scripts/common.sh@353 -- # local d=2 00:07:36.463 06:09:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.463 06:09:55 -- scripts/common.sh@355 -- # echo 2 00:07:36.463 06:09:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.463 06:09:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.463 06:09:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.463 06:09:55 -- scripts/common.sh@368 -- # return 0 00:07:36.463 06:09:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.463 06:09:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.463 --rc genhtml_branch_coverage=1 00:07:36.463 --rc genhtml_function_coverage=1 00:07:36.463 --rc genhtml_legend=1 00:07:36.463 --rc geninfo_all_blocks=1 00:07:36.463 --rc geninfo_unexecuted_blocks=1 00:07:36.463 00:07:36.463 ' 00:07:36.463 06:09:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.463 --rc genhtml_branch_coverage=1 00:07:36.463 --rc genhtml_function_coverage=1 00:07:36.463 --rc genhtml_legend=1 00:07:36.463 --rc geninfo_all_blocks=1 00:07:36.463 --rc geninfo_unexecuted_blocks=1 00:07:36.463 00:07:36.463 ' 00:07:36.463 06:09:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.463 --rc genhtml_branch_coverage=1 00:07:36.463 --rc genhtml_function_coverage=1 00:07:36.463 --rc genhtml_legend=1 00:07:36.463 --rc geninfo_all_blocks=1 00:07:36.463 --rc geninfo_unexecuted_blocks=1 00:07:36.463 00:07:36.463 ' 00:07:36.463 06:09:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.463 --rc genhtml_branch_coverage=1 00:07:36.463 --rc genhtml_function_coverage=1 00:07:36.463 --rc genhtml_legend=1 00:07:36.463 --rc geninfo_all_blocks=1 00:07:36.463 --rc geninfo_unexecuted_blocks=1 00:07:36.463 00:07:36.463 ' 00:07:36.463 06:09:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.463 06:09:55 -- nvmf/common.sh@7 -- # uname -s 00:07:36.463 06:09:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.463 06:09:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
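
The scripts/common.sh xtrace above is a field-by-field version comparison: both version strings are split on '.', '-' and ':' into arrays and compared position by position, which is how the harness decides that the installed lcov 1.15 predates 2.x. A condensed sketch of the same comparison (numeric fields assumed; suffixes like "rc1" would need extra handling):

  # Return 0 when $1 sorts before $2, comparing fields split on . - :
  version_lt() {
      local IFS=.-: v=0 ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ++v ))
      done
      return 1    # equal
  }
  version_lt 1.15 2 && echo "lcov predates 2.0"
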
00:07:36.463 06:09:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.463 06:09:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.463 06:09:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.463 06:09:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.463 06:09:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.463 06:09:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.463 06:09:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.463 06:09:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.463 06:09:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:07:36.463 06:09:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:07:36.463 06:09:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.463 06:09:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.463 06:09:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:36.463 06:09:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.463 06:09:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.463 06:09:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.463 06:09:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.463 06:09:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.463 06:09:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.463 06:09:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.463 06:09:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.463 06:09:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.463 06:09:55 -- paths/export.sh@5 -- # export PATH 00:07:36.463 06:09:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.463 06:09:55 -- nvmf/common.sh@51 -- # : 0 00:07:36.463 06:09:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.463 06:09:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.463 06:09:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.463 06:09:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.463 06:09:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.463 06:09:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.463 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.463 06:09:55 -- nvmf/common.sh@37 -- # '[' -n '' 
']' 00:07:36.463 06:09:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.463 06:09:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.463 06:09:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:36.463 06:09:55 -- spdk/autotest.sh@32 -- # uname -s 00:07:36.463 06:09:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:36.463 06:09:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:36.463 06:09:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:36.463 06:09:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:36.463 06:09:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:36.463 06:09:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:36.463 06:09:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:36.463 06:09:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:36.463 06:09:55 -- spdk/autotest.sh@48 -- # udevadm_pid=69376 00:07:36.463 06:09:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:36.463 06:09:55 -- pm/common@17 -- # local monitor 00:07:36.463 06:09:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:36.463 06:09:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:36.463 06:09:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:36.463 06:09:55 -- pm/common@25 -- # sleep 1 00:07:36.463 06:09:55 -- pm/common@21 -- # date +%s 00:07:36.463 06:09:55 -- pm/common@21 -- # date +%s 00:07:36.463 06:09:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732687795 00:07:36.463 06:09:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732687795 00:07:36.464 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732687795_collect-cpu-load.pm.log 00:07:36.464 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732687795_collect-vmstat.pm.log 00:07:37.418 06:09:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:37.418 06:09:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:37.418 06:09:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.418 06:09:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.418 06:09:56 -- spdk/autotest.sh@59 -- # create_test_list 00:07:37.418 06:09:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:37.418 06:09:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.418 06:09:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:37.418 06:09:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:37.418 06:09:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:37.418 06:09:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:37.418 06:09:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:37.418 06:09:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:37.418 06:09:56 -- common/autotest_common.sh@1457 -- # uname 00:07:37.418 06:09:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:37.418 06:09:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:37.418 06:09:56 -- 
common/autotest_common.sh@1477 -- # uname 00:07:37.418 06:09:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:37.418 06:09:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:37.418 06:09:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:37.676 lcov: LCOV version 1.15 00:07:37.676 06:09:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:49.924 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:49.924 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:04.800 06:10:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:04.800 06:10:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.800 06:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 06:10:23 -- spdk/autotest.sh@78 -- # rm -f 00:08:04.800 06:10:23 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:05.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:05.059 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:05.059 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:05.059 06:10:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:05.059 06:10:24 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:05.059 06:10:24 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:05.059 06:10:24 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:05.059 06:10:24 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:05.059 06:10:24 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:05.059 06:10:24 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:05.059 06:10:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:05.059 06:10:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:05.059 06:10:24 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:05.059 06:10:24 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:08:05.059 06:10:24 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:08:05.059 06:10:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:05.059 06:10:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:05.059 06:10:24 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:05.059 06:10:24 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:08:05.059 06:10:24 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:08:05.059 06:10:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:05.059 06:10:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:05.059 06:10:24 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:05.059 06:10:24 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:05.059 06:10:24 -- common/autotest_common.sh@1650 
-- # local device=nvme1n1 00:08:05.059 06:10:24 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:05.059 06:10:24 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:05.059 06:10:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:05.059 06:10:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:05.059 06:10:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:05.059 06:10:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:05.059 06:10:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:05.059 06:10:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:05.059 No valid GPT data, bailing 00:08:05.316 06:10:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:05.316 06:10:24 -- scripts/common.sh@394 -- # pt= 00:08:05.316 06:10:24 -- scripts/common.sh@395 -- # return 1 00:08:05.316 06:10:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:05.316 1+0 records in 00:08:05.316 1+0 records out 00:08:05.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496296 s, 211 MB/s 00:08:05.316 06:10:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:05.316 06:10:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:05.316 06:10:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:08:05.316 06:10:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:08:05.316 06:10:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:08:05.316 No valid GPT data, bailing 00:08:05.316 06:10:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:05.316 06:10:24 -- scripts/common.sh@394 -- # pt= 00:08:05.316 06:10:24 -- scripts/common.sh@395 -- # return 1 00:08:05.316 06:10:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:08:05.316 1+0 records in 00:08:05.316 1+0 records out 00:08:05.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00332903 s, 315 MB/s 00:08:05.316 06:10:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:05.316 06:10:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:05.316 06:10:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:08:05.316 06:10:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:08:05.316 06:10:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:08:05.316 No valid GPT data, bailing 00:08:05.316 06:10:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:05.316 06:10:24 -- scripts/common.sh@394 -- # pt= 00:08:05.316 06:10:24 -- scripts/common.sh@395 -- # return 1 00:08:05.316 06:10:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:08:05.316 1+0 records in 00:08:05.316 1+0 records out 00:08:05.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610136 s, 172 MB/s 00:08:05.317 06:10:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:05.317 06:10:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:05.317 06:10:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:05.317 06:10:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:05.317 06:10:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:05.317 No valid GPT data, bailing 00:08:05.317 06:10:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:05.574 06:10:24 -- scripts/common.sh@394 -- # pt= 00:08:05.574 
06:10:24 -- scripts/common.sh@395 -- # return 1 00:08:05.574 06:10:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:05.574 1+0 records in 00:08:05.574 1+0 records out 00:08:05.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455999 s, 230 MB/s 00:08:05.574 06:10:24 -- spdk/autotest.sh@105 -- # sync 00:08:05.574 06:10:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:05.574 06:10:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:05.574 06:10:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:07.481 06:10:26 -- spdk/autotest.sh@111 -- # uname -s 00:08:07.481 06:10:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:07.481 06:10:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:07.481 06:10:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:07.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:07.739 Hugepages 00:08:07.739 node hugesize free / total 00:08:07.739 node0 1048576kB 0 / 0 00:08:07.739 node0 2048kB 0 / 0 00:08:07.739 00:08:07.740 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:07.997 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:07.997 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:07.997 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:08:07.997 06:10:27 -- spdk/autotest.sh@117 -- # uname -s 00:08:07.998 06:10:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:07.998 06:10:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:07.998 06:10:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:08.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:08.563 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.821 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.821 06:10:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:09.755 06:10:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:09.755 06:10:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:09.755 06:10:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:09.755 06:10:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:09.755 06:10:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:09.755 06:10:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:09.755 06:10:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:09.755 06:10:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:09.755 06:10:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:09.755 06:10:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:09.755 06:10:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:09.755 06:10:29 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:10.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:10.013 Waiting for block devices as requested 00:08:10.271 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:10.271 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:10.271 06:10:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 
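
The pre-cleanup pass above checks every non-partition nvme namespace for a valid partition table (hence the "No valid GPT data, bailing" lines) and zeroes the first MiB with dd when none is found, producing the "1+0 records" output. A simplified sketch of that loop using only the blkid check (the real script also consults scripts/spdk-gpt.py first):

  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do                  # namespaces, skip partitions
      # blkid prints a PTTYPE value only when a partition table is present
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1   # wipe stale metadata
      fi
  done
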
00:08:10.271 06:10:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:10.271 06:10:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:10.271 06:10:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:10.271 06:10:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:10.271 06:10:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:10.271 06:10:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:10.271 06:10:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:10.271 06:10:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:10.271 06:10:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:10.271 06:10:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:10.271 06:10:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:10.272 06:10:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:10.272 06:10:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:10.272 06:10:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:10.272 06:10:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:10.272 06:10:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:10.272 06:10:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:10.272 06:10:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:10.272 06:10:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:10.272 06:10:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:10.272 06:10:29 -- common/autotest_common.sh@1543 -- # continue 00:08:10.272 06:10:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:10.272 06:10:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:10.272 06:10:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:10.272 06:10:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:10.272 06:10:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:10.272 06:10:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:10.272 06:10:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:10.272 06:10:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:10.272 06:10:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:10.272 06:10:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:10.272 06:10:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:10.272 06:10:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:10.272 06:10:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:10.530 06:10:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:10.530 06:10:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:10.530 06:10:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:10.530 06:10:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:10.530 06:10:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:10.530 06:10:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:10.530 06:10:29 -- 
common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:10.530 06:10:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:10.530 06:10:29 -- common/autotest_common.sh@1543 -- # continue 00:08:10.530 06:10:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:10.530 06:10:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.530 06:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 06:10:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:10.530 06:10:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.530 06:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 06:10:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:11.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.096 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:11.096 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:11.354 06:10:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:11.354 06:10:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.354 06:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 06:10:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:11.354 06:10:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:11.354 06:10:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:11.354 06:10:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:11.355 06:10:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:11.355 06:10:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:11.355 06:10:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:11.355 06:10:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:11.355 06:10:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:11.355 06:10:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:11.355 06:10:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:11.355 06:10:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:11.355 06:10:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:11.355 06:10:30 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:11.355 06:10:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:11.355 06:10:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:11.355 06:10:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:11.355 06:10:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:11.355 06:10:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:11.355 06:10:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:11.355 06:10:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:11.355 06:10:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:11.355 06:10:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:11.355 06:10:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:11.355 06:10:30 -- common/autotest_common.sh@1572 -- # return 0 00:08:11.355 06:10:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:11.355 06:10:30 -- common/autotest_common.sh@1580 -- # return 0 00:08:11.355 06:10:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:11.355 06:10:30 -- 
spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:11.355 06:10:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:11.355 06:10:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:11.355 06:10:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:11.355 06:10:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.355 06:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.355 06:10:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:11.355 06:10:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:11.355 06:10:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.355 06:10:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.355 06:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.355 ************************************ 00:08:11.355 START TEST env 00:08:11.355 ************************************ 00:08:11.355 06:10:30 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:11.355 * Looking for test storage... 00:08:11.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:11.355 06:10:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.355 06:10:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.355 06:10:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.613 06:10:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.614 06:10:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.614 06:10:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.614 06:10:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.614 06:10:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.614 06:10:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.614 06:10:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.614 06:10:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.614 06:10:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.614 06:10:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.614 06:10:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.614 06:10:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.614 06:10:30 env -- scripts/common.sh@344 -- # case "$op" in 00:08:11.614 06:10:30 env -- scripts/common.sh@345 -- # : 1 00:08:11.614 06:10:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.614 06:10:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.614 06:10:30 env -- scripts/common.sh@365 -- # decimal 1 00:08:11.614 06:10:30 env -- scripts/common.sh@353 -- # local d=1 00:08:11.614 06:10:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.614 06:10:30 env -- scripts/common.sh@355 -- # echo 1 00:08:11.614 06:10:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.614 06:10:30 env -- scripts/common.sh@366 -- # decimal 2 00:08:11.614 06:10:30 env -- scripts/common.sh@353 -- # local d=2 00:08:11.614 06:10:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.614 06:10:30 env -- scripts/common.sh@355 -- # echo 2 00:08:11.614 06:10:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.614 06:10:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.614 06:10:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.614 06:10:30 env -- scripts/common.sh@368 -- # return 0 00:08:11.614 06:10:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.614 06:10:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.614 --rc genhtml_branch_coverage=1 00:08:11.614 --rc genhtml_function_coverage=1 00:08:11.614 --rc genhtml_legend=1 00:08:11.614 --rc geninfo_all_blocks=1 00:08:11.614 --rc geninfo_unexecuted_blocks=1 00:08:11.614 00:08:11.614 ' 00:08:11.614 06:10:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.614 --rc genhtml_branch_coverage=1 00:08:11.614 --rc genhtml_function_coverage=1 00:08:11.614 --rc genhtml_legend=1 00:08:11.614 --rc geninfo_all_blocks=1 00:08:11.614 --rc geninfo_unexecuted_blocks=1 00:08:11.614 00:08:11.614 ' 00:08:11.614 06:10:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.614 --rc genhtml_branch_coverage=1 00:08:11.614 --rc genhtml_function_coverage=1 00:08:11.614 --rc genhtml_legend=1 00:08:11.614 --rc geninfo_all_blocks=1 00:08:11.614 --rc geninfo_unexecuted_blocks=1 00:08:11.614 00:08:11.614 ' 00:08:11.614 06:10:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.614 --rc genhtml_branch_coverage=1 00:08:11.614 --rc genhtml_function_coverage=1 00:08:11.614 --rc genhtml_legend=1 00:08:11.614 --rc geninfo_all_blocks=1 00:08:11.614 --rc geninfo_unexecuted_blocks=1 00:08:11.614 00:08:11.614 ' 00:08:11.614 06:10:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:11.614 06:10:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.614 06:10:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.614 06:10:30 env -- common/autotest_common.sh@10 -- # set +x 00:08:11.614 ************************************ 00:08:11.614 START TEST env_memory 00:08:11.614 ************************************ 00:08:11.614 06:10:30 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:11.614 00:08:11.614 00:08:11.614 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.614 http://cunit.sourceforge.net/ 00:08:11.614 00:08:11.614 00:08:11.614 Suite: memory 00:08:11.614 Test: alloc and free memory map ...[2024-11-27 06:10:30.972990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:11.614 passed 00:08:11.614 Test: mem map translation ...[2024-11-27 06:10:30.998025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:11.614 [2024-11-27 06:10:30.998066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:11.614 [2024-11-27 06:10:30.998108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:11.614 [2024-11-27 06:10:30.998114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:11.614 passed 00:08:11.614 Test: mem map registration ...[2024-11-27 06:10:31.049413] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:11.614 [2024-11-27 06:10:31.049452] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:11.614 passed 00:08:11.614 Test: mem map adjacent registrations ...passed 00:08:11.614 00:08:11.614 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.614 suites 1 1 n/a 0 0 00:08:11.614 tests 4 4 4 0 0 00:08:11.614 asserts 152 152 152 0 n/a 00:08:11.614 00:08:11.614 Elapsed time = 0.172 seconds 00:08:11.614 00:08:11.614 real 0m0.190s 00:08:11.614 user 0m0.176s 00:08:11.614 sys 0m0.008s 00:08:11.614 06:10:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.614 06:10:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:11.614 ************************************ 00:08:11.614 END TEST env_memory 00:08:11.614 ************************************ 00:08:11.873 06:10:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:11.873 06:10:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.873 06:10:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.873 06:10:31 env -- common/autotest_common.sh@10 -- # set +x 00:08:11.873 ************************************ 00:08:11.873 START TEST env_vtophys 00:08:11.873 ************************************ 00:08:11.873 06:10:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:11.873 EAL: lib.eal log level changed from notice to debug 00:08:11.873 EAL: Detected lcore 0 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 1 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 2 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 3 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 4 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 5 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 6 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 7 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 8 as core 0 on socket 0 00:08:11.873 EAL: Detected lcore 9 as core 0 on socket 0 00:08:11.873 EAL: Maximum logical cores by configuration: 128 00:08:11.873 EAL: Detected CPU lcores: 10 00:08:11.873 EAL: Detected NUMA nodes: 1 00:08:11.873 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:08:11.873 EAL: Detected shared linkage of DPDK 00:08:11.873 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:08:11.873 EAL: Registered [vdev] bus. 00:08:11.873 EAL: bus.vdev log level changed from disabled to notice 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:08:11.873 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:11.873 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:08:11.873 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:08:11.873 EAL: No shared files mode enabled, IPC will be disabled 00:08:11.873 EAL: No shared files mode enabled, IPC is disabled 00:08:11.873 EAL: Selected IOVA mode 'PA' 00:08:11.873 EAL: Probing VFIO support... 00:08:11.873 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:11.873 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:11.873 EAL: Ask a virtual area of 0x2e000 bytes 00:08:11.873 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:11.873 EAL: Setting up physically contiguous memory... 
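The EAL messages above are DPDK coming up inside the test binary: PMD shared objects are dlopen()ed, VFIO support is probed (and skipped here, since the vfio kernel module is not loaded), and the PA IOVA mode is selected. A minimal sketch of the SPDK call that drives this phase, using the public spdk_env_init() API; the process name and core mask below are illustrative, not the values the autotest harness passes:

    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        /* Populate opts with defaults, then override the basics. */
        spdk_env_opts_init(&opts);
        opts.name = "env_demo";   /* process name that shows up in EAL logs */
        opts.core_mask = "0x1";   /* one core, matching the -c 0x1 runs below */

        /* spdk_env_init() runs rte_eal_init() under the hood, which emits
         * the "open shared lib", "Probing VFIO support" and "Selected IOVA
         * mode" lines seen in this trace. */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }

        /* ... run tests ... */

        spdk_env_fini();
        return 0;
    }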
00:08:11.873 EAL: Setting maximum number of open files to 524288 00:08:11.873 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:11.873 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:11.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:11.873 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:11.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:11.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:11.873 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:11.873 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:11.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:11.873 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:11.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:11.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:11.873 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:11.873 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:11.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:11.873 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:11.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:11.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:11.873 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:11.873 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:11.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:11.873 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:11.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:11.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:11.873 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:11.873 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:11.873 EAL: Hugepages will be freed exactly as allocated. 00:08:11.873 EAL: No shared files mode enabled, IPC is disabled 00:08:11.873 EAL: No shared files mode enabled, IPC is disabled 00:08:11.873 EAL: TSC frequency is ~2600000 KHz 00:08:11.873 EAL: Main lcore 0 is ready (tid=7ff537221a00;cpuset=[0]) 00:08:11.873 EAL: Trying to obtain current memory policy. 00:08:11.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.873 EAL: Restoring previous memory policy: 0 00:08:11.873 EAL: request: mp_malloc_sync 00:08:11.873 EAL: No shared files mode enabled, IPC is disabled 00:08:11.873 EAL: Heap on socket 0 was expanded by 2MB 00:08:11.874 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Mem event callback 'spdk:(nil)' registered 00:08:11.874 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:11.874 00:08:11.874 00:08:11.874 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.874 http://cunit.sourceforge.net/ 00:08:11.874 00:08:11.874 00:08:11.874 Suite: components_suite 00:08:11.874 Test: vtophys_malloc_test ...passed 00:08:11.874 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
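With the memseg lists reserved and the main lcore ready, the vtophys suite exercises virtual-to-physical translation. A minimal sketch of the two APIs at the heart of the vtophys_malloc_test above, spdk_dma_malloc() and spdk_vtophys(); the size and alignment values are illustrative:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    /* Assumes spdk_env_init() has already succeeded (see earlier sketch). */
    static void vtophys_demo(void)
    {
        uint64_t size = 4096;
        void *buf = spdk_dma_malloc(size, 0x1000, NULL); /* hugepage-backed */

        if (buf == NULL)
            return;

        /* Translate the virtual address to a physical (IOVA) address; this
         * is the translation the vtophys tests verify for every buffer. */
        uint64_t paddr = spdk_vtophys(buf, &size);
        if (paddr == SPDK_VTOPHYS_ERROR)
            printf("no translation for %p\n", buf);
        else
            printf("%p -> 0x%" PRIx64 "\n", buf, paddr);

        spdk_dma_free(buf);
    }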
00:08:11.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.874 EAL: Restoring previous memory policy: 4 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was expanded by 4MB 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was shrunk by 4MB 00:08:11.874 EAL: Trying to obtain current memory policy. 00:08:11.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.874 EAL: Restoring previous memory policy: 4 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was expanded by 6MB 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was shrunk by 6MB 00:08:11.874 EAL: Trying to obtain current memory policy. 00:08:11.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.874 EAL: Restoring previous memory policy: 4 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was expanded by 10MB 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was shrunk by 10MB 00:08:11.874 EAL: Trying to obtain current memory policy. 00:08:11.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.874 EAL: Restoring previous memory policy: 4 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was expanded by 18MB 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was shrunk by 18MB 00:08:11.874 EAL: Trying to obtain current memory policy. 00:08:11.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.874 EAL: Restoring previous memory policy: 4 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was expanded by 34MB 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was shrunk by 34MB 00:08:11.874 EAL: Trying to obtain current memory policy. 
00:08:11.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.874 EAL: Restoring previous memory policy: 4 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was expanded by 66MB 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was shrunk by 66MB 00:08:11.874 EAL: Trying to obtain current memory policy. 00:08:11.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:11.874 EAL: Restoring previous memory policy: 4 00:08:11.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:11.874 EAL: request: mp_malloc_sync 00:08:11.874 EAL: No shared files mode enabled, IPC is disabled 00:08:11.874 EAL: Heap on socket 0 was expanded by 130MB 00:08:12.132 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.132 EAL: request: mp_malloc_sync 00:08:12.132 EAL: No shared files mode enabled, IPC is disabled 00:08:12.132 EAL: Heap on socket 0 was shrunk by 130MB 00:08:12.132 EAL: Trying to obtain current memory policy. 00:08:12.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:12.132 EAL: Restoring previous memory policy: 4 00:08:12.132 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.132 EAL: request: mp_malloc_sync 00:08:12.132 EAL: No shared files mode enabled, IPC is disabled 00:08:12.132 EAL: Heap on socket 0 was expanded by 258MB 00:08:12.132 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.132 EAL: request: mp_malloc_sync 00:08:12.132 EAL: No shared files mode enabled, IPC is disabled 00:08:12.132 EAL: Heap on socket 0 was shrunk by 258MB 00:08:12.132 EAL: Trying to obtain current memory policy. 00:08:12.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:12.132 EAL: Restoring previous memory policy: 4 00:08:12.132 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.132 EAL: request: mp_malloc_sync 00:08:12.132 EAL: No shared files mode enabled, IPC is disabled 00:08:12.132 EAL: Heap on socket 0 was expanded by 514MB 00:08:12.132 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.390 EAL: request: mp_malloc_sync 00:08:12.390 EAL: No shared files mode enabled, IPC is disabled 00:08:12.390 EAL: Heap on socket 0 was shrunk by 514MB 00:08:12.390 EAL: Trying to obtain current memory policy. 
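Each expand/shrink pair in this stretch is one allocation round of vtophys_spdk_malloc_test: the heap grows to satisfy an allocation, the registered memory event callback fires, the buffer is freed, and the heap shrinks again. A rough sketch of the DPDK mechanism involved; SPDK registers a callback like this one (named "spdk", hence the 'spdk:(nil)' lines) to keep its translation maps in sync. The "demo" name and loop sizes here are illustrative:

    #include <stdio.h>
    #include <rte_malloc.h>
    #include <rte_memory.h>

    /* Invoked by DPDK whenever hugepage memory is mapped in or released. */
    static void mem_event_cb(enum rte_mem_event type, const void *addr,
                             size_t len, void *arg)
    {
        (void)arg;
        printf("%s: addr=%p len=%zu\n",
               type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    /* Call after rte_eal_init(); mirrors the grow/shrink pattern above. */
    static void heap_demo(void)
    {
        rte_mem_event_callback_register("demo", mem_event_cb, NULL);

        for (size_t sz = 4UL << 20; sz <= 64UL << 20; sz *= 2) {
            void *p = rte_malloc(NULL, sz, 0);  /* "Heap ... was expanded" */
            rte_free(p);                        /* "Heap ... was shrunk" */
        }
    }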
00:08:12.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:12.390 EAL: Restoring previous memory policy: 4 00:08:12.390 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.390 EAL: request: mp_malloc_sync 00:08:12.390 EAL: No shared files mode enabled, IPC is disabled 00:08:12.390 EAL: Heap on socket 0 was expanded by 1026MB 00:08:12.390 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.650 passed 00:08:12.650 00:08:12.650 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.650 suites 1 1 n/a 0 0 00:08:12.650 tests 2 2 2 0 0 00:08:12.650 asserts 5379 5379 5379 0 n/a 00:08:12.650 00:08:12.650 Elapsed time = 0.632 seconds 00:08:12.650 EAL: request: mp_malloc_sync 00:08:12.650 EAL: No shared files mode enabled, IPC is disabled 00:08:12.650 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:12.650 EAL: Calling mem event callback 'spdk:(nil)' 00:08:12.650 EAL: request: mp_malloc_sync 00:08:12.650 EAL: No shared files mode enabled, IPC is disabled 00:08:12.650 EAL: Heap on socket 0 was shrunk by 2MB 00:08:12.650 EAL: No shared files mode enabled, IPC is disabled 00:08:12.650 EAL: No shared files mode enabled, IPC is disabled 00:08:12.650 EAL: No shared files mode enabled, IPC is disabled 00:08:12.650 00:08:12.650 real 0m0.826s 00:08:12.650 user 0m0.403s 00:08:12.650 sys 0m0.288s 00:08:12.650 06:10:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.650 06:10:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:12.650 ************************************ 00:08:12.650 END TEST env_vtophys 00:08:12.650 ************************************ 00:08:12.650 06:10:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:12.650 06:10:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.650 06:10:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.650 06:10:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:12.650 ************************************ 00:08:12.650 START TEST env_pci 00:08:12.650 ************************************ 00:08:12.650 06:10:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:12.650 00:08:12.650 00:08:12.650 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.650 http://cunit.sourceforge.net/ 00:08:12.650 00:08:12.650 00:08:12.650 Suite: pci 00:08:12.650 Test: pci_hook ...[2024-11-27 06:10:32.091529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 71591 has claimed it 00:08:12.650 passed 00:08:12.650 00:08:12.650 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.650 suites 1 1 n/a 0 0 00:08:12.650 tests 1 1 1 0 0 00:08:12.650 asserts 25 25 25 0 n/a 00:08:12.650 00:08:12.650 Elapsed time = 0.001 seconds 00:08:12.650 EAL: Cannot find device (10000:00:01.0) 00:08:12.650 EAL: Failed to attach device on primary process 00:08:12.650 00:08:12.650 real 0m0.019s 00:08:12.650 user 0m0.007s 00:08:12.650 sys 0m0.011s 00:08:12.650 06:10:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.650 ************************************ 00:08:12.650 END TEST env_pci 00:08:12.650 ************************************ 00:08:12.650 06:10:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:12.650 06:10:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:12.650 06:10:32 env -- env/env.sh@15 -- # uname 00:08:12.650 06:10:32 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:12.650 06:10:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:12.650 06:10:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:12.650 06:10:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.650 06:10:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.650 06:10:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:12.650 ************************************ 00:08:12.650 START TEST env_dpdk_post_init 00:08:12.650 ************************************ 00:08:12.650 06:10:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:12.908 EAL: Detected CPU lcores: 10 00:08:12.908 EAL: Detected NUMA nodes: 1 00:08:12.908 EAL: Detected shared linkage of DPDK 00:08:12.908 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:12.908 EAL: Selected IOVA mode 'PA' 00:08:12.908 Starting DPDK initialization... 00:08:12.908 Starting SPDK post initialization... 00:08:12.908 SPDK NVMe probe 00:08:12.908 Attaching to 0000:00:10.0 00:08:12.908 Attaching to 0000:00:11.0 00:08:12.908 Attached to 0000:00:10.0 00:08:12.908 Attached to 0000:00:11.0 00:08:12.908 Cleaning up... 00:08:12.908 00:08:12.908 real 0m0.175s 00:08:12.908 user 0m0.046s 00:08:12.908 sys 0m0.028s 00:08:12.908 06:10:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.908 ************************************ 00:08:12.908 END TEST env_dpdk_post_init 00:08:12.908 ************************************ 00:08:12.908 06:10:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:12.908 06:10:32 env -- env/env.sh@26 -- # uname 00:08:12.908 06:10:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:12.908 06:10:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:12.908 06:10:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.908 06:10:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.908 06:10:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:12.908 ************************************ 00:08:12.908 START TEST env_mem_callbacks 00:08:12.908 ************************************ 00:08:12.908 06:10:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:12.908 EAL: Detected CPU lcores: 10 00:08:12.908 EAL: Detected NUMA nodes: 1 00:08:12.908 EAL: Detected shared linkage of DPDK 00:08:12.908 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:12.908 EAL: Selected IOVA mode 'PA' 00:08:13.165 00:08:13.165 00:08:13.165 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.165 http://cunit.sourceforge.net/ 00:08:13.165 00:08:13.165 00:08:13.165 Suite: memory 00:08:13.165 Test: test ... 
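The mem_callbacks test that follows registers and unregisters plain buffers with SPDK's memory map; the register/unregister lines in the trace below are those hooks firing. A minimal sketch of the API being exercised, using posix_memalign() to obtain a buffer SPDK did not allocate itself, which is exactly the case registration exists for; the 2MB size matches the hugepage granularity the map works in:

    #include <stdlib.h>
    #include "spdk/env.h"

    static int register_demo(void)
    {
        void *buf = NULL;
        size_t len = 2 * 1024 * 1024;   /* 2MB-aligned, 2MB long */

        if (posix_memalign(&buf, len, len) != 0)
            return -1;

        /* Make the externally allocated buffer visible to SPDK's
         * vtophys/DMA machinery ("register ..." lines below). */
        if (spdk_mem_register(buf, len) != 0) {
            free(buf);
            return -1;
        }

        /* ... hand buf to SPDK I/O paths ... */

        spdk_mem_unregister(buf, len);  /* "unregister ..." lines below */
        free(buf);
        return 0;
    }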
00:08:13.165 register 0x200000200000 2097152 00:08:13.165 malloc 3145728 00:08:13.165 register 0x200000400000 4194304 00:08:13.165 buf 0x200000500000 len 3145728 PASSED 00:08:13.165 malloc 64 00:08:13.165 buf 0x2000004fff40 len 64 PASSED 00:08:13.165 malloc 4194304 00:08:13.165 register 0x200000800000 6291456 00:08:13.165 buf 0x200000a00000 len 4194304 PASSED 00:08:13.165 free 0x200000500000 3145728 00:08:13.165 free 0x2000004fff40 64 00:08:13.165 unregister 0x200000400000 4194304 PASSED 00:08:13.165 free 0x200000a00000 4194304 00:08:13.165 unregister 0x200000800000 6291456 PASSED 00:08:13.165 malloc 8388608 00:08:13.165 register 0x200000400000 10485760 00:08:13.165 buf 0x200000600000 len 8388608 PASSED 00:08:13.165 free 0x200000600000 8388608 00:08:13.165 unregister 0x200000400000 10485760 PASSED 00:08:13.165 passed 00:08:13.165 00:08:13.165 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.165 suites 1 1 n/a 0 0 00:08:13.165 tests 1 1 1 0 0 00:08:13.165 asserts 15 15 15 0 n/a 00:08:13.165 00:08:13.165 Elapsed time = 0.008 seconds 00:08:13.165 ************************************ 00:08:13.165 END TEST env_mem_callbacks 00:08:13.165 ************************************ 00:08:13.165 00:08:13.165 real 0m0.135s 00:08:13.165 user 0m0.013s 00:08:13.165 sys 0m0.020s 00:08:13.165 06:10:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.165 06:10:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:13.165 ************************************ 00:08:13.165 END TEST env 00:08:13.165 ************************************ 00:08:13.165 00:08:13.165 real 0m1.825s 00:08:13.165 user 0m0.813s 00:08:13.165 sys 0m0.551s 00:08:13.165 06:10:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.165 06:10:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:13.165 06:10:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:13.165 06:10:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.165 06:10:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.165 06:10:32 -- common/autotest_common.sh@10 -- # set +x 00:08:13.165 ************************************ 00:08:13.165 START TEST rpc 00:08:13.165 ************************************ 00:08:13.165 06:10:32 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:13.423 * Looking for test storage... 
00:08:13.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.423 06:10:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.423 06:10:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.423 06:10:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.423 06:10:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.423 06:10:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.423 06:10:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.423 06:10:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.423 06:10:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:13.423 06:10:32 rpc -- scripts/common.sh@345 -- # : 1 00:08:13.423 06:10:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.423 06:10:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.423 06:10:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:13.423 06:10:32 rpc -- scripts/common.sh@353 -- # local d=1 00:08:13.423 06:10:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.423 06:10:32 rpc -- scripts/common.sh@355 -- # echo 1 00:08:13.423 06:10:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.423 06:10:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@353 -- # local d=2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.423 06:10:32 rpc -- scripts/common.sh@355 -- # echo 2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.423 06:10:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.423 06:10:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.423 06:10:32 rpc -- scripts/common.sh@368 -- # return 0 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:13.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.423 --rc genhtml_branch_coverage=1 00:08:13.423 --rc genhtml_function_coverage=1 00:08:13.423 --rc genhtml_legend=1 00:08:13.423 --rc geninfo_all_blocks=1 00:08:13.423 --rc geninfo_unexecuted_blocks=1 00:08:13.423 00:08:13.423 ' 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:13.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.423 --rc genhtml_branch_coverage=1 00:08:13.423 --rc genhtml_function_coverage=1 00:08:13.423 --rc genhtml_legend=1 00:08:13.423 --rc geninfo_all_blocks=1 00:08:13.423 --rc geninfo_unexecuted_blocks=1 00:08:13.423 00:08:13.423 ' 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:13.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.423 --rc genhtml_branch_coverage=1 00:08:13.423 --rc genhtml_function_coverage=1 00:08:13.423 --rc 
genhtml_legend=1 00:08:13.423 --rc geninfo_all_blocks=1 00:08:13.423 --rc geninfo_unexecuted_blocks=1 00:08:13.423 00:08:13.423 ' 00:08:13.423 06:10:32 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:13.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.423 --rc genhtml_branch_coverage=1 00:08:13.423 --rc genhtml_function_coverage=1 00:08:13.423 --rc genhtml_legend=1 00:08:13.423 --rc geninfo_all_blocks=1 00:08:13.423 --rc geninfo_unexecuted_blocks=1 00:08:13.423 00:08:13.423 ' 00:08:13.423 06:10:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71709 00:08:13.423 06:10:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:13.423 06:10:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71709 00:08:13.424 06:10:32 rpc -- common/autotest_common.sh@835 -- # '[' -z 71709 ']' 00:08:13.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.424 06:10:32 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.424 06:10:32 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.424 06:10:32 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.424 06:10:32 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.424 06:10:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:13.424 06:10:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.424 [2024-11-27 06:10:32.849711] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:13.424 [2024-11-27 06:10:32.849778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ] 00:08:13.681 [2024-11-27 06:10:32.964134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:13.681 [2024-11-27 06:10:32.992805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.681 [2024-11-27 06:10:33.012220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:13.681 [2024-11-27 06:10:33.012263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71709' to capture a snapshot of events at runtime. 00:08:13.681 [2024-11-27 06:10:33.012270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.681 [2024-11-27 06:10:33.012275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.681 [2024-11-27 06:10:33.012279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71709 for offline analysis/debug. 
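At this point spdk_tgt is launching (the reactor line that follows confirms it) and the harness's waitforlisten helper polls /var/tmp/spdk.sock until the JSON-RPC server accepts connections, as the "Waiting for process to start up..." line just above shows. A rough C equivalent of that readiness check; the retry count and delay are illustrative:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Poll a UNIX-domain socket until the RPC server accepts a connection,
     * roughly what the waitforlisten shell helper does. */
    static int wait_for_rpc(const char *path, int retries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };

        snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

        while (retries-- > 0) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);

            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;        /* server is listening */
            }
            close(fd);
            usleep(100 * 1000);  /* 100 ms between attempts */
        }
        return -1;
    }

A call such as wait_for_rpc("/var/tmp/spdk.sock", 100) corresponds to the wait performed before the rpc tests start issuing commands.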
00:08:13.681 [2024-11-27 06:10:33.012545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.681 06:10:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.681 06:10:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:13.681 06:10:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:13.681 06:10:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:13.681 06:10:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:13.681 06:10:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:13.681 06:10:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.681 06:10:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.681 06:10:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.681 ************************************ 00:08:13.681 START TEST rpc_integrity 00:08:13.681 ************************************ 00:08:13.681 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:13.681 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:13.681 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.681 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.681 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.681 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:13.681 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:13.939 { 00:08:13.939 "aliases": [ 00:08:13.939 "8c57ba04-a2e8-4fa4-bf90-4a5becc13f78" 00:08:13.939 ], 00:08:13.939 "assigned_rate_limits": { 00:08:13.939 "r_mbytes_per_sec": 0, 00:08:13.939 "rw_ios_per_sec": 0, 00:08:13.939 "rw_mbytes_per_sec": 0, 00:08:13.939 "w_mbytes_per_sec": 0 00:08:13.939 }, 00:08:13.939 "block_size": 512, 00:08:13.939 "claimed": false, 00:08:13.939 "driver_specific": {}, 00:08:13.939 "memory_domains": [ 00:08:13.939 { 00:08:13.939 "dma_device_id": "system", 00:08:13.939 "dma_device_type": 1 00:08:13.939 }, 00:08:13.939 { 00:08:13.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.939 "dma_device_type": 2 00:08:13.939 } 00:08:13.939 ], 00:08:13.939 "name": "Malloc0", 
00:08:13.939 "num_blocks": 16384, 00:08:13.939 "product_name": "Malloc disk", 00:08:13.939 "supported_io_types": { 00:08:13.939 "abort": true, 00:08:13.939 "compare": false, 00:08:13.939 "compare_and_write": false, 00:08:13.939 "copy": true, 00:08:13.939 "flush": true, 00:08:13.939 "get_zone_info": false, 00:08:13.939 "nvme_admin": false, 00:08:13.939 "nvme_io": false, 00:08:13.939 "nvme_io_md": false, 00:08:13.939 "nvme_iov_md": false, 00:08:13.939 "read": true, 00:08:13.939 "reset": true, 00:08:13.939 "seek_data": false, 00:08:13.939 "seek_hole": false, 00:08:13.939 "unmap": true, 00:08:13.939 "write": true, 00:08:13.939 "write_zeroes": true, 00:08:13.939 "zcopy": true, 00:08:13.939 "zone_append": false, 00:08:13.939 "zone_management": false 00:08:13.939 }, 00:08:13.939 "uuid": "8c57ba04-a2e8-4fa4-bf90-4a5becc13f78", 00:08:13.939 "zoned": false 00:08:13.939 } 00:08:13.939 ]' 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.939 [2024-11-27 06:10:33.298850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:13.939 [2024-11-27 06:10:33.298895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.939 [2024-11-27 06:10:33.298924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15e9840 00:08:13.939 [2024-11-27 06:10:33.298934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.939 [2024-11-27 06:10:33.300462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.939 [2024-11-27 06:10:33.300495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:13.939 Passthru0 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.939 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.939 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:13.939 { 00:08:13.939 "aliases": [ 00:08:13.939 "8c57ba04-a2e8-4fa4-bf90-4a5becc13f78" 00:08:13.939 ], 00:08:13.939 "assigned_rate_limits": { 00:08:13.939 "r_mbytes_per_sec": 0, 00:08:13.939 "rw_ios_per_sec": 0, 00:08:13.939 "rw_mbytes_per_sec": 0, 00:08:13.939 "w_mbytes_per_sec": 0 00:08:13.939 }, 00:08:13.939 "block_size": 512, 00:08:13.939 "claim_type": "exclusive_write", 00:08:13.939 "claimed": true, 00:08:13.939 "driver_specific": {}, 00:08:13.939 "memory_domains": [ 00:08:13.939 { 00:08:13.939 "dma_device_id": "system", 00:08:13.939 "dma_device_type": 1 00:08:13.939 }, 00:08:13.939 { 00:08:13.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.939 "dma_device_type": 2 00:08:13.939 } 00:08:13.939 ], 00:08:13.939 "name": "Malloc0", 00:08:13.939 "num_blocks": 16384, 00:08:13.939 "product_name": "Malloc disk", 00:08:13.939 "supported_io_types": { 00:08:13.939 "abort": true, 00:08:13.940 "compare": false, 00:08:13.940 
"compare_and_write": false, 00:08:13.940 "copy": true, 00:08:13.940 "flush": true, 00:08:13.940 "get_zone_info": false, 00:08:13.940 "nvme_admin": false, 00:08:13.940 "nvme_io": false, 00:08:13.940 "nvme_io_md": false, 00:08:13.940 "nvme_iov_md": false, 00:08:13.940 "read": true, 00:08:13.940 "reset": true, 00:08:13.940 "seek_data": false, 00:08:13.940 "seek_hole": false, 00:08:13.940 "unmap": true, 00:08:13.940 "write": true, 00:08:13.940 "write_zeroes": true, 00:08:13.940 "zcopy": true, 00:08:13.940 "zone_append": false, 00:08:13.940 "zone_management": false 00:08:13.940 }, 00:08:13.940 "uuid": "8c57ba04-a2e8-4fa4-bf90-4a5becc13f78", 00:08:13.940 "zoned": false 00:08:13.940 }, 00:08:13.940 { 00:08:13.940 "aliases": [ 00:08:13.940 "3aba4c83-d2f3-5b33-bee2-a58cac0322b0" 00:08:13.940 ], 00:08:13.940 "assigned_rate_limits": { 00:08:13.940 "r_mbytes_per_sec": 0, 00:08:13.940 "rw_ios_per_sec": 0, 00:08:13.940 "rw_mbytes_per_sec": 0, 00:08:13.940 "w_mbytes_per_sec": 0 00:08:13.940 }, 00:08:13.940 "block_size": 512, 00:08:13.940 "claimed": false, 00:08:13.940 "driver_specific": { 00:08:13.940 "passthru": { 00:08:13.940 "base_bdev_name": "Malloc0", 00:08:13.940 "name": "Passthru0" 00:08:13.940 } 00:08:13.940 }, 00:08:13.940 "memory_domains": [ 00:08:13.940 { 00:08:13.940 "dma_device_id": "system", 00:08:13.940 "dma_device_type": 1 00:08:13.940 }, 00:08:13.940 { 00:08:13.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.940 "dma_device_type": 2 00:08:13.940 } 00:08:13.940 ], 00:08:13.940 "name": "Passthru0", 00:08:13.940 "num_blocks": 16384, 00:08:13.940 "product_name": "passthru", 00:08:13.940 "supported_io_types": { 00:08:13.940 "abort": true, 00:08:13.940 "compare": false, 00:08:13.940 "compare_and_write": false, 00:08:13.940 "copy": true, 00:08:13.940 "flush": true, 00:08:13.940 "get_zone_info": false, 00:08:13.940 "nvme_admin": false, 00:08:13.940 "nvme_io": false, 00:08:13.940 "nvme_io_md": false, 00:08:13.940 "nvme_iov_md": false, 00:08:13.940 "read": true, 00:08:13.940 "reset": true, 00:08:13.940 "seek_data": false, 00:08:13.940 "seek_hole": false, 00:08:13.940 "unmap": true, 00:08:13.940 "write": true, 00:08:13.940 "write_zeroes": true, 00:08:13.940 "zcopy": true, 00:08:13.940 "zone_append": false, 00:08:13.940 "zone_management": false 00:08:13.940 }, 00:08:13.940 "uuid": "3aba4c83-d2f3-5b33-bee2-a58cac0322b0", 00:08:13.940 "zoned": false 00:08:13.940 } 00:08:13.940 ]' 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:13.940 06:10:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:13.940 00:08:13.940 real 0m0.210s 00:08:13.940 user 0m0.128s 00:08:13.940 sys 0m0.017s 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.940 ************************************ 00:08:13.940 END TEST rpc_integrity 00:08:13.940 ************************************ 00:08:13.940 06:10:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:13.940 06:10:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:13.940 06:10:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.940 06:10:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.940 06:10:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.210 ************************************ 00:08:14.210 START TEST rpc_plugins 00:08:14.210 ************************************ 00:08:14.210 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:14.210 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:14.210 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.210 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:14.210 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.210 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:14.210 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:14.210 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.210 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:14.210 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.210 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:14.210 { 00:08:14.210 "aliases": [ 00:08:14.210 "53a4e343-9797-403f-9689-4958746f310b" 00:08:14.210 ], 00:08:14.210 "assigned_rate_limits": { 00:08:14.210 "r_mbytes_per_sec": 0, 00:08:14.210 "rw_ios_per_sec": 0, 00:08:14.210 "rw_mbytes_per_sec": 0, 00:08:14.210 "w_mbytes_per_sec": 0 00:08:14.210 }, 00:08:14.210 "block_size": 4096, 00:08:14.210 "claimed": false, 00:08:14.210 "driver_specific": {}, 00:08:14.210 "memory_domains": [ 00:08:14.210 { 00:08:14.210 "dma_device_id": "system", 00:08:14.211 "dma_device_type": 1 00:08:14.211 }, 00:08:14.211 { 00:08:14.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.211 "dma_device_type": 2 00:08:14.211 } 00:08:14.211 ], 00:08:14.211 "name": "Malloc1", 00:08:14.211 "num_blocks": 256, 00:08:14.211 "product_name": "Malloc disk", 00:08:14.211 "supported_io_types": { 00:08:14.211 "abort": true, 00:08:14.211 "compare": false, 00:08:14.211 "compare_and_write": false, 00:08:14.211 "copy": true, 00:08:14.211 "flush": true, 00:08:14.211 "get_zone_info": false, 00:08:14.211 "nvme_admin": false, 00:08:14.211 "nvme_io": false, 00:08:14.211 "nvme_io_md": false, 00:08:14.211 "nvme_iov_md": false, 00:08:14.211 "read": true, 00:08:14.211 "reset": true, 00:08:14.211 "seek_data": false, 00:08:14.211 "seek_hole": false, 00:08:14.211 "unmap": true, 00:08:14.211 "write": true, 00:08:14.211 "write_zeroes": true, 00:08:14.211 "zcopy": true, 00:08:14.211 "zone_append": false, 
00:08:14.211 "zone_management": false 00:08:14.211 }, 00:08:14.211 "uuid": "53a4e343-9797-403f-9689-4958746f310b", 00:08:14.211 "zoned": false 00:08:14.211 } 00:08:14.211 ]' 00:08:14.211 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:14.211 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:14.211 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.211 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.211 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:14.211 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:14.211 06:10:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:14.211 00:08:14.211 real 0m0.126s 00:08:14.211 user 0m0.066s 00:08:14.211 sys 0m0.019s 00:08:14.211 ************************************ 00:08:14.211 END TEST rpc_plugins 00:08:14.211 ************************************ 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.211 06:10:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:14.211 06:10:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:14.211 06:10:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.211 06:10:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.211 06:10:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.211 ************************************ 00:08:14.211 START TEST rpc_trace_cmd_test 00:08:14.211 ************************************ 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:14.211 "bdev": { 00:08:14.211 "mask": "0x8", 00:08:14.211 "tpoint_mask": "0xffffffffffffffff" 00:08:14.211 }, 00:08:14.211 "bdev_nvme": { 00:08:14.211 "mask": "0x4000", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "bdev_raid": { 00:08:14.211 "mask": "0x20000", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "blob": { 00:08:14.211 "mask": "0x10000", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "blobfs": { 00:08:14.211 "mask": "0x80", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "dsa": { 00:08:14.211 "mask": "0x200", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "ftl": { 00:08:14.211 "mask": "0x40", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "iaa": { 00:08:14.211 "mask": "0x1000", 
00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "iscsi_conn": { 00:08:14.211 "mask": "0x2", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "nvme_pcie": { 00:08:14.211 "mask": "0x800", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "nvme_tcp": { 00:08:14.211 "mask": "0x2000", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "nvmf_rdma": { 00:08:14.211 "mask": "0x10", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "nvmf_tcp": { 00:08:14.211 "mask": "0x20", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "scheduler": { 00:08:14.211 "mask": "0x40000", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "scsi": { 00:08:14.211 "mask": "0x4", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "sock": { 00:08:14.211 "mask": "0x8000", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "thread": { 00:08:14.211 "mask": "0x400", 00:08:14.211 "tpoint_mask": "0x0" 00:08:14.211 }, 00:08:14.211 "tpoint_group_mask": "0x8", 00:08:14.211 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71709" 00:08:14.211 }' 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:14.211 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:14.468 00:08:14.468 real 0m0.179s 00:08:14.468 user 0m0.145s 00:08:14.468 sys 0m0.022s 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.468 06:10:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 ************************************ 00:08:14.468 END TEST rpc_trace_cmd_test 00:08:14.468 ************************************ 00:08:14.468 06:10:33 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:08:14.468 06:10:33 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:08:14.468 06:10:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.468 06:10:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.468 06:10:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 ************************************ 00:08:14.468 START TEST go_rpc 00:08:14.468 ************************************ 00:08:14.468 06:10:33 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:08:14.468 06:10:33 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.468 06:10:33 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.468 06:10:33 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["b0a53b92-7bae-4f8d-9f4e-4a32280620d9"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"b0a53b92-7bae-4f8d-9f4e-4a32280620d9","zoned":false}]' 00:08:14.468 06:10:33 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:08:14.725 06:10:34 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:08:14.725 06:10:34 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:14.725 06:10:34 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.725 06:10:34 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.725 06:10:34 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.725 06:10:34 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:08:14.725 06:10:34 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:08:14.725 06:10:34 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:08:14.725 06:10:34 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:08:14.725 00:08:14.725 real 0m0.161s 00:08:14.725 user 0m0.093s 00:08:14.725 sys 0m0.032s 00:08:14.725 ************************************ 00:08:14.725 END TEST go_rpc 00:08:14.725 ************************************ 00:08:14.725 06:10:34 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.725 06:10:34 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.725 06:10:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:14.725 06:10:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:14.725 06:10:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.725 06:10:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.725 06:10:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.725 ************************************ 00:08:14.725 START TEST rpc_daemon_integrity 00:08:14.725 ************************************ 00:08:14.725 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:14.726 
06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:14.726 { 00:08:14.726 "aliases": [ 00:08:14.726 "2260b044-dd1e-438a-9200-b16784b5bc74" 00:08:14.726 ], 00:08:14.726 "assigned_rate_limits": { 00:08:14.726 "r_mbytes_per_sec": 0, 00:08:14.726 "rw_ios_per_sec": 0, 00:08:14.726 "rw_mbytes_per_sec": 0, 00:08:14.726 "w_mbytes_per_sec": 0 00:08:14.726 }, 00:08:14.726 "block_size": 512, 00:08:14.726 "claimed": false, 00:08:14.726 "driver_specific": {}, 00:08:14.726 "memory_domains": [ 00:08:14.726 { 00:08:14.726 "dma_device_id": "system", 00:08:14.726 "dma_device_type": 1 00:08:14.726 }, 00:08:14.726 { 00:08:14.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.726 "dma_device_type": 2 00:08:14.726 } 00:08:14.726 ], 00:08:14.726 "name": "Malloc3", 00:08:14.726 "num_blocks": 16384, 00:08:14.726 "product_name": "Malloc disk", 00:08:14.726 "supported_io_types": { 00:08:14.726 "abort": true, 00:08:14.726 "compare": false, 00:08:14.726 "compare_and_write": false, 00:08:14.726 "copy": true, 00:08:14.726 "flush": true, 00:08:14.726 "get_zone_info": false, 00:08:14.726 "nvme_admin": false, 00:08:14.726 "nvme_io": false, 00:08:14.726 "nvme_io_md": false, 00:08:14.726 "nvme_iov_md": false, 00:08:14.726 "read": true, 00:08:14.726 "reset": true, 00:08:14.726 "seek_data": false, 00:08:14.726 "seek_hole": false, 00:08:14.726 "unmap": true, 00:08:14.726 "write": true, 00:08:14.726 "write_zeroes": true, 00:08:14.726 "zcopy": true, 00:08:14.726 "zone_append": false, 00:08:14.726 "zone_management": false 00:08:14.726 }, 00:08:14.726 "uuid": "2260b044-dd1e-438a-9200-b16784b5bc74", 00:08:14.726 "zoned": false 00:08:14.726 } 00:08:14.726 ]' 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.726 [2024-11-27 06:10:34.243185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:14.726 [2024-11-27 06:10:34.243224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.726 [2024-11-27 06:10:34.243236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15eafd0 00:08:14.726 [2024-11-27 06:10:34.243242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:14.726 [2024-11-27 06:10:34.244590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.726 [2024-11-27 06:10:34.244620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:14.726 Passthru0 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.726 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:14.983 { 00:08:14.983 "aliases": [ 00:08:14.983 "2260b044-dd1e-438a-9200-b16784b5bc74" 00:08:14.983 ], 00:08:14.983 "assigned_rate_limits": { 00:08:14.983 "r_mbytes_per_sec": 0, 00:08:14.983 "rw_ios_per_sec": 0, 00:08:14.983 "rw_mbytes_per_sec": 0, 00:08:14.983 "w_mbytes_per_sec": 0 00:08:14.983 }, 00:08:14.983 "block_size": 512, 00:08:14.983 "claim_type": "exclusive_write", 00:08:14.983 "claimed": true, 00:08:14.983 "driver_specific": {}, 00:08:14.983 "memory_domains": [ 00:08:14.983 { 00:08:14.983 "dma_device_id": "system", 00:08:14.983 "dma_device_type": 1 00:08:14.983 }, 00:08:14.983 { 00:08:14.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.983 "dma_device_type": 2 00:08:14.983 } 00:08:14.983 ], 00:08:14.983 "name": "Malloc3", 00:08:14.983 "num_blocks": 16384, 00:08:14.983 "product_name": "Malloc disk", 00:08:14.983 "supported_io_types": { 00:08:14.983 "abort": true, 00:08:14.983 "compare": false, 00:08:14.983 "compare_and_write": false, 00:08:14.983 "copy": true, 00:08:14.983 "flush": true, 00:08:14.983 "get_zone_info": false, 00:08:14.983 "nvme_admin": false, 00:08:14.983 "nvme_io": false, 00:08:14.983 "nvme_io_md": false, 00:08:14.983 "nvme_iov_md": false, 00:08:14.983 "read": true, 00:08:14.983 "reset": true, 00:08:14.983 "seek_data": false, 00:08:14.983 "seek_hole": false, 00:08:14.983 "unmap": true, 00:08:14.983 "write": true, 00:08:14.983 "write_zeroes": true, 00:08:14.983 "zcopy": true, 00:08:14.983 "zone_append": false, 00:08:14.983 "zone_management": false 00:08:14.983 }, 00:08:14.983 "uuid": "2260b044-dd1e-438a-9200-b16784b5bc74", 00:08:14.983 "zoned": false 00:08:14.983 }, 00:08:14.983 { 00:08:14.983 "aliases": [ 00:08:14.983 "b75a12b4-ae35-593b-befc-b33e7a493f5a" 00:08:14.983 ], 00:08:14.983 "assigned_rate_limits": { 00:08:14.983 "r_mbytes_per_sec": 0, 00:08:14.983 "rw_ios_per_sec": 0, 00:08:14.983 "rw_mbytes_per_sec": 0, 00:08:14.983 "w_mbytes_per_sec": 0 00:08:14.983 }, 00:08:14.983 "block_size": 512, 00:08:14.983 "claimed": false, 00:08:14.983 "driver_specific": { 00:08:14.983 "passthru": { 00:08:14.983 "base_bdev_name": "Malloc3", 00:08:14.983 "name": "Passthru0" 00:08:14.983 } 00:08:14.983 }, 00:08:14.983 "memory_domains": [ 00:08:14.983 { 00:08:14.983 "dma_device_id": "system", 00:08:14.983 "dma_device_type": 1 00:08:14.983 }, 00:08:14.983 { 00:08:14.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.983 "dma_device_type": 2 00:08:14.983 } 00:08:14.983 ], 00:08:14.983 "name": "Passthru0", 00:08:14.983 "num_blocks": 16384, 00:08:14.983 "product_name": "passthru", 00:08:14.983 "supported_io_types": { 00:08:14.983 "abort": true, 00:08:14.983 "compare": false, 00:08:14.983 "compare_and_write": false, 00:08:14.983 "copy": true, 
00:08:14.983 "flush": true, 00:08:14.983 "get_zone_info": false, 00:08:14.983 "nvme_admin": false, 00:08:14.983 "nvme_io": false, 00:08:14.983 "nvme_io_md": false, 00:08:14.983 "nvme_iov_md": false, 00:08:14.983 "read": true, 00:08:14.983 "reset": true, 00:08:14.983 "seek_data": false, 00:08:14.983 "seek_hole": false, 00:08:14.983 "unmap": true, 00:08:14.983 "write": true, 00:08:14.983 "write_zeroes": true, 00:08:14.983 "zcopy": true, 00:08:14.983 "zone_append": false, 00:08:14.983 "zone_management": false 00:08:14.983 }, 00:08:14.983 "uuid": "b75a12b4-ae35-593b-befc-b33e7a493f5a", 00:08:14.983 "zoned": false 00:08:14.983 } 00:08:14.983 ]' 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:14.983 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:14.984 06:10:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:14.984 00:08:14.984 real 0m0.237s 00:08:14.984 user 0m0.132s 00:08:14.984 sys 0m0.038s 00:08:14.984 ************************************ 00:08:14.984 END TEST rpc_daemon_integrity 00:08:14.984 ************************************ 00:08:14.984 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.984 06:10:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:14.984 06:10:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:14.984 06:10:34 rpc -- rpc/rpc.sh@84 -- # killprocess 71709 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 71709 ']' 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@958 -- # kill -0 71709 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@959 -- # uname 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71709 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.984 killing process with pid 71709 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71709' 00:08:14.984 06:10:34 rpc -- 
common/autotest_common.sh@973 -- # kill 71709 00:08:14.984 06:10:34 rpc -- common/autotest_common.sh@978 -- # wait 71709 00:08:15.241 00:08:15.241 real 0m1.989s 00:08:15.241 user 0m2.523s 00:08:15.241 sys 0m0.550s 00:08:15.241 06:10:34 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.241 ************************************ 00:08:15.241 END TEST rpc 00:08:15.241 ************************************ 00:08:15.241 06:10:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.241 06:10:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:15.241 06:10:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.241 06:10:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.241 06:10:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.241 ************************************ 00:08:15.241 START TEST skip_rpc 00:08:15.241 ************************************ 00:08:15.241 06:10:34 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:15.241 * Looking for test storage... 00:08:15.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:15.498 06:10:34 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:15.498 06:10:34 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:15.498 06:10:34 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:15.498 06:10:34 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:15.498 06:10:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.499 06:10:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.499 06:10:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.499 06:10:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:15.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.499 --rc genhtml_branch_coverage=1 00:08:15.499 --rc genhtml_function_coverage=1 00:08:15.499 --rc genhtml_legend=1 00:08:15.499 --rc geninfo_all_blocks=1 00:08:15.499 --rc geninfo_unexecuted_blocks=1 00:08:15.499 00:08:15.499 ' 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:15.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.499 --rc genhtml_branch_coverage=1 00:08:15.499 --rc genhtml_function_coverage=1 00:08:15.499 --rc genhtml_legend=1 00:08:15.499 --rc geninfo_all_blocks=1 00:08:15.499 --rc geninfo_unexecuted_blocks=1 00:08:15.499 00:08:15.499 ' 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:15.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.499 --rc genhtml_branch_coverage=1 00:08:15.499 --rc genhtml_function_coverage=1 00:08:15.499 --rc genhtml_legend=1 00:08:15.499 --rc geninfo_all_blocks=1 00:08:15.499 --rc geninfo_unexecuted_blocks=1 00:08:15.499 00:08:15.499 ' 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:15.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.499 --rc genhtml_branch_coverage=1 00:08:15.499 --rc genhtml_function_coverage=1 00:08:15.499 --rc genhtml_legend=1 00:08:15.499 --rc geninfo_all_blocks=1 00:08:15.499 --rc geninfo_unexecuted_blocks=1 00:08:15.499 00:08:15.499 ' 00:08:15.499 06:10:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:15.499 06:10:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:15.499 06:10:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.499 06:10:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.499 ************************************ 00:08:15.499 START TEST skip_rpc 00:08:15.499 ************************************ 00:08:15.499 06:10:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:15.499 06:10:34 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71959 00:08:15.499 06:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:15.499 06:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:15.499 06:10:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:15.499 [2024-11-27 06:10:34.909993] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:15.499 [2024-11-27 06:10:34.910056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71959 ] 00:08:15.499 [2024-11-27 06:10:35.024042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:15.756 [2024-11-27 06:10:35.053057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.756 [2024-11-27 06:10:35.073071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.018 06:10:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:21.018 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:21.018 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:21.018 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.019 2024/11/27 06:10:39 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71959 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71959 ']' 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71959 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
71959 00:08:21.019 killing process with pid 71959 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71959' 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71959 00:08:21.019 06:10:39 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71959 00:08:21.019 00:08:21.019 real 0m5.222s 00:08:21.019 user 0m4.951s 00:08:21.019 sys 0m0.164s 00:08:21.019 06:10:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.019 06:10:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.019 ************************************ 00:08:21.019 END TEST skip_rpc 00:08:21.019 ************************************ 00:08:21.019 06:10:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:21.019 06:10:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.019 06:10:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.019 06:10:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.019 ************************************ 00:08:21.019 START TEST skip_rpc_with_json 00:08:21.019 ************************************ 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=72051 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 72051 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 72051 ']' 00:08:21.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:21.019 [2024-11-27 06:10:40.181302] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:21.019 [2024-11-27 06:10:40.181365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72051 ] 00:08:21.019 [2024-11-27 06:10:40.292272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
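The skip_rpc_with_json run starting here drives a save/restore round trip: configure a live target over RPC, dump its state with save_config, then boot a target directly from that JSON with no RPC server at all. A minimal sketch of the same flow, with paths illustrative and the wait-for-socket step elided:

  build/bin/spdk_tgt -m 0x1 &                                         # target with the default RPC socket
  scripts/rpc.py nvmf_create_transport -t tcp                         # make a change worth persisting
  scripts/rpc.py save_config > /tmp/config.json                       # dump every subsystem as JSON
  kill $!; wait                                                       # stop the first instance
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json   # replay the saved state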
00:08:21.019 [2024-11-27 06:10:40.310897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.019 [2024-11-27 06:10:40.329053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:21.019 [2024-11-27 06:10:40.490233] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:21.019 2024/11/27 06:10:40 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:08:21.019 request: 00:08:21.019 { 00:08:21.019 "method": "nvmf_get_transports", 00:08:21.019 "params": { 00:08:21.019 "trtype": "tcp" 00:08:21.019 } 00:08:21.019 } 00:08:21.019 Got JSON-RPC error response 00:08:21.019 GoRPCClient: error on JSON-RPC call 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:21.019 [2024-11-27 06:10:40.498302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.019 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:21.278 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.278 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:21.278 { 00:08:21.278 "subsystems": [ 00:08:21.278 { 00:08:21.278 "subsystem": "fsdev", 00:08:21.278 "config": [ 00:08:21.278 { 00:08:21.278 "method": "fsdev_set_opts", 00:08:21.278 "params": { 00:08:21.278 "fsdev_io_cache_size": 256, 00:08:21.278 "fsdev_io_pool_size": 65535 00:08:21.278 } 00:08:21.278 } 00:08:21.278 ] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "keyring", 00:08:21.278 "config": [] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "iobuf", 00:08:21.278 "config": [ 00:08:21.278 { 00:08:21.278 "method": "iobuf_set_options", 00:08:21.278 "params": { 00:08:21.278 "enable_numa": false, 00:08:21.278 "large_bufsize": 135168, 00:08:21.278 "large_pool_count": 1024, 00:08:21.278 "small_bufsize": 8192, 00:08:21.278 "small_pool_count": 8192 00:08:21.278 } 00:08:21.278 } 00:08:21.278 ] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "sock", 00:08:21.278 "config": [ 00:08:21.278 { 00:08:21.278 "method": "sock_set_default_impl", 00:08:21.278 "params": { 00:08:21.278 "impl_name": "posix" 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": 
"sock_impl_set_options", 00:08:21.278 "params": { 00:08:21.278 "enable_ktls": false, 00:08:21.278 "enable_placement_id": 0, 00:08:21.278 "enable_quickack": false, 00:08:21.278 "enable_recv_pipe": true, 00:08:21.278 "enable_zerocopy_send_client": false, 00:08:21.278 "enable_zerocopy_send_server": true, 00:08:21.278 "impl_name": "ssl", 00:08:21.278 "recv_buf_size": 4096, 00:08:21.278 "send_buf_size": 4096, 00:08:21.278 "tls_version": 0, 00:08:21.278 "zerocopy_threshold": 0 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "sock_impl_set_options", 00:08:21.278 "params": { 00:08:21.278 "enable_ktls": false, 00:08:21.278 "enable_placement_id": 0, 00:08:21.278 "enable_quickack": false, 00:08:21.278 "enable_recv_pipe": true, 00:08:21.278 "enable_zerocopy_send_client": false, 00:08:21.278 "enable_zerocopy_send_server": true, 00:08:21.278 "impl_name": "posix", 00:08:21.278 "recv_buf_size": 2097152, 00:08:21.278 "send_buf_size": 2097152, 00:08:21.278 "tls_version": 0, 00:08:21.278 "zerocopy_threshold": 0 00:08:21.278 } 00:08:21.278 } 00:08:21.278 ] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "vmd", 00:08:21.278 "config": [] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "accel", 00:08:21.278 "config": [ 00:08:21.278 { 00:08:21.278 "method": "accel_set_options", 00:08:21.278 "params": { 00:08:21.278 "buf_count": 2048, 00:08:21.278 "large_cache_size": 16, 00:08:21.278 "sequence_count": 2048, 00:08:21.278 "small_cache_size": 128, 00:08:21.278 "task_count": 2048 00:08:21.278 } 00:08:21.278 } 00:08:21.278 ] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "bdev", 00:08:21.278 "config": [ 00:08:21.278 { 00:08:21.278 "method": "bdev_set_options", 00:08:21.278 "params": { 00:08:21.278 "bdev_auto_examine": true, 00:08:21.278 "bdev_io_cache_size": 256, 00:08:21.278 "bdev_io_pool_size": 65535, 00:08:21.278 "iobuf_large_cache_size": 16, 00:08:21.278 "iobuf_small_cache_size": 128 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "bdev_raid_set_options", 00:08:21.278 "params": { 00:08:21.278 "process_max_bandwidth_mb_sec": 0, 00:08:21.278 "process_window_size_kb": 1024 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "bdev_iscsi_set_options", 00:08:21.278 "params": { 00:08:21.278 "timeout_sec": 30 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "bdev_nvme_set_options", 00:08:21.278 "params": { 00:08:21.278 "action_on_timeout": "none", 00:08:21.278 "allow_accel_sequence": false, 00:08:21.278 "arbitration_burst": 0, 00:08:21.278 "bdev_retry_count": 3, 00:08:21.278 "ctrlr_loss_timeout_sec": 0, 00:08:21.278 "delay_cmd_submit": true, 00:08:21.278 "dhchap_dhgroups": [ 00:08:21.278 "null", 00:08:21.278 "ffdhe2048", 00:08:21.278 "ffdhe3072", 00:08:21.278 "ffdhe4096", 00:08:21.278 "ffdhe6144", 00:08:21.278 "ffdhe8192" 00:08:21.278 ], 00:08:21.278 "dhchap_digests": [ 00:08:21.278 "sha256", 00:08:21.278 "sha384", 00:08:21.278 "sha512" 00:08:21.278 ], 00:08:21.278 "disable_auto_failback": false, 00:08:21.278 "fast_io_fail_timeout_sec": 0, 00:08:21.278 "generate_uuids": false, 00:08:21.278 "high_priority_weight": 0, 00:08:21.278 "io_path_stat": false, 00:08:21.278 "io_queue_requests": 0, 00:08:21.278 "keep_alive_timeout_ms": 10000, 00:08:21.278 "low_priority_weight": 0, 00:08:21.278 "medium_priority_weight": 0, 00:08:21.278 "nvme_adminq_poll_period_us": 10000, 00:08:21.278 "nvme_error_stat": false, 00:08:21.278 "nvme_ioq_poll_period_us": 0, 00:08:21.278 "rdma_cm_event_timeout_ms": 0, 00:08:21.278 "rdma_max_cq_size": 
0, 00:08:21.278 "rdma_srq_size": 0, 00:08:21.278 "reconnect_delay_sec": 0, 00:08:21.278 "timeout_admin_us": 0, 00:08:21.278 "timeout_us": 0, 00:08:21.278 "transport_ack_timeout": 0, 00:08:21.278 "transport_retry_count": 4, 00:08:21.278 "transport_tos": 0 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "bdev_nvme_set_hotplug", 00:08:21.278 "params": { 00:08:21.278 "enable": false, 00:08:21.278 "period_us": 100000 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "bdev_wait_for_examine" 00:08:21.278 } 00:08:21.278 ] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "scsi", 00:08:21.278 "config": null 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "scheduler", 00:08:21.278 "config": [ 00:08:21.278 { 00:08:21.278 "method": "framework_set_scheduler", 00:08:21.278 "params": { 00:08:21.278 "name": "static" 00:08:21.278 } 00:08:21.278 } 00:08:21.278 ] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "vhost_scsi", 00:08:21.278 "config": [] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "vhost_blk", 00:08:21.278 "config": [] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "ublk", 00:08:21.278 "config": [] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "nbd", 00:08:21.278 "config": [] 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "subsystem": "nvmf", 00:08:21.278 "config": [ 00:08:21.278 { 00:08:21.278 "method": "nvmf_set_config", 00:08:21.278 "params": { 00:08:21.278 "admin_cmd_passthru": { 00:08:21.278 "identify_ctrlr": false 00:08:21.278 }, 00:08:21.278 "dhchap_dhgroups": [ 00:08:21.278 "null", 00:08:21.278 "ffdhe2048", 00:08:21.278 "ffdhe3072", 00:08:21.278 "ffdhe4096", 00:08:21.278 "ffdhe6144", 00:08:21.278 "ffdhe8192" 00:08:21.278 ], 00:08:21.278 "dhchap_digests": [ 00:08:21.278 "sha256", 00:08:21.278 "sha384", 00:08:21.278 "sha512" 00:08:21.278 ], 00:08:21.278 "discovery_filter": "match_any" 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "nvmf_set_max_subsystems", 00:08:21.278 "params": { 00:08:21.278 "max_subsystems": 1024 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "nvmf_set_crdt", 00:08:21.278 "params": { 00:08:21.278 "crdt1": 0, 00:08:21.278 "crdt2": 0, 00:08:21.278 "crdt3": 0 00:08:21.278 } 00:08:21.278 }, 00:08:21.278 { 00:08:21.278 "method": "nvmf_create_transport", 00:08:21.278 "params": { 00:08:21.279 "abort_timeout_sec": 1, 00:08:21.279 "ack_timeout": 0, 00:08:21.279 "buf_cache_size": 4294967295, 00:08:21.279 "c2h_success": true, 00:08:21.279 "data_wr_pool_size": 0, 00:08:21.279 "dif_insert_or_strip": false, 00:08:21.279 "in_capsule_data_size": 4096, 00:08:21.279 "io_unit_size": 131072, 00:08:21.279 "max_aq_depth": 128, 00:08:21.279 "max_io_qpairs_per_ctrlr": 127, 00:08:21.279 "max_io_size": 131072, 00:08:21.279 "max_queue_depth": 128, 00:08:21.279 "num_shared_buffers": 511, 00:08:21.279 "sock_priority": 0, 00:08:21.279 "trtype": "TCP", 00:08:21.279 "zcopy": false 00:08:21.279 } 00:08:21.279 } 00:08:21.279 ] 00:08:21.279 }, 00:08:21.279 { 00:08:21.279 "subsystem": "iscsi", 00:08:21.279 "config": [ 00:08:21.279 { 00:08:21.279 "method": "iscsi_set_options", 00:08:21.279 "params": { 00:08:21.279 "allow_duplicated_isid": false, 00:08:21.279 "chap_group": 0, 00:08:21.279 "data_out_pool_size": 2048, 00:08:21.279 "default_time2retain": 20, 00:08:21.279 "default_time2wait": 2, 00:08:21.279 "disable_chap": false, 00:08:21.279 "error_recovery_level": 0, 00:08:21.279 "first_burst_length": 8192, 00:08:21.279 "immediate_data": true, 00:08:21.279 
"immediate_data_pool_size": 16384, 00:08:21.279 "max_connections_per_session": 2, 00:08:21.279 "max_large_datain_per_connection": 64, 00:08:21.279 "max_queue_depth": 64, 00:08:21.279 "max_r2t_per_connection": 4, 00:08:21.279 "max_sessions": 128, 00:08:21.279 "mutual_chap": false, 00:08:21.279 "node_base": "iqn.2016-06.io.spdk", 00:08:21.279 "nop_in_interval": 30, 00:08:21.279 "nop_timeout": 60, 00:08:21.279 "pdu_pool_size": 36864, 00:08:21.279 "require_chap": false 00:08:21.279 } 00:08:21.279 } 00:08:21.279 ] 00:08:21.279 } 00:08:21.279 ] 00:08:21.279 } 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 72051 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 72051 ']' 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 72051 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72051 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.279 killing process with pid 72051 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72051' 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 72051 00:08:21.279 06:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 72051 00:08:21.537 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:21.537 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=72072 00:08:21.537 06:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 72072 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 72072 ']' 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 72072 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72072 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72072' 00:08:26.800 killing process with pid 72072 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 72072 00:08:26.800 06:10:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 72072 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:26.800 00:08:26.800 real 0m5.943s 00:08:26.800 user 0m5.624s 00:08:26.800 sys 0m0.374s 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:26.800 ************************************ 00:08:26.800 END TEST skip_rpc_with_json 00:08:26.800 ************************************ 00:08:26.800 06:10:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:26.800 06:10:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.800 06:10:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.800 06:10:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.800 ************************************ 00:08:26.800 START TEST skip_rpc_with_delay 00:08:26.800 ************************************ 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:26.800 [2024-11-27 06:10:46.169052] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
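The error above is the exact assertion of skip_rpc_with_delay: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which is impossible once --no-rpc-server disables the RPC server, so startup must abort. Reproducing it is one command, taken verbatim from this run:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc    # aborts: cannot wait for an RPC server that will never start
  echo $?                                                     # the non-zero exit status is what the test checks for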
00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.800 00:08:26.800 real 0m0.059s 00:08:26.800 user 0m0.033s 00:08:26.800 sys 0m0.025s 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.800 06:10:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:26.800 ************************************ 00:08:26.800 END TEST skip_rpc_with_delay 00:08:26.800 ************************************ 00:08:26.800 06:10:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:26.800 06:10:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:26.800 06:10:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:26.800 06:10:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.800 06:10:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.800 06:10:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.800 ************************************ 00:08:26.800 START TEST exit_on_failed_rpc_init 00:08:26.800 ************************************ 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72176 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 72176 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 72176 ']' 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:26.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.800 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.801 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:26.801 [2024-11-27 06:10:46.259807] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:26.801 [2024-11-27 06:10:46.259864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72176 ] 00:08:27.059 [2024-11-27 06:10:46.371266] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
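exit_on_failed_rpc_init, starting here, checks that a second target refuses to come up when the default RPC socket is already owned: the first instance binds /var/tmp/spdk.sock, so the second must fail RPC initialization and exit non-zero. A minimal sketch of the collision:

  build/bin/spdk_tgt -m 0x1 &    # first instance binds /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2      # second instance: 'RPC Unix domain socket path ... in use', exits non-zero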
00:08:27.059 [2024-11-27 06:10:46.394181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.059 [2024-11-27 06:10:46.411474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:27.059 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:27.318 [2024-11-27 06:10:46.616735] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:27.318 [2024-11-27 06:10:46.616802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72192 ] 00:08:27.318 [2024-11-27 06:10:46.730716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:27.318 [2024-11-27 06:10:46.757361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.318 [2024-11-27 06:10:46.774596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.318 [2024-11-27 06:10:46.774652] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:27.318 [2024-11-27 06:10:46.774658] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:27.318 [2024-11-27 06:10:46.774662] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 72176 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 72176 ']' 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 72176 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72176 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.318 killing process with pid 72176 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72176' 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 72176 00:08:27.318 06:10:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 72176 00:08:27.576 00:08:27.576 real 0m0.793s 00:08:27.576 user 0m0.862s 00:08:27.576 sys 0m0.231s 00:08:27.576 06:10:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.576 ************************************ 00:08:27.576 END TEST exit_on_failed_rpc_init 00:08:27.576 ************************************ 00:08:27.576 06:10:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:27.576 06:10:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:27.576 00:08:27.576 real 0m12.347s 00:08:27.576 user 0m11.613s 00:08:27.576 sys 0m0.961s 00:08:27.576 06:10:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.576 06:10:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.576 ************************************ 00:08:27.576 END TEST skip_rpc 00:08:27.576 ************************************ 00:08:27.576 06:10:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:27.576 06:10:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.576 06:10:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.576 06:10:47 -- common/autotest_common.sh@10 -- # set +x 00:08:27.576 
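Every rpc_cmd in this log is a JSON-RPC 2.0 exchange over that Unix domain socket, and the C client exercised by the rpc_client test below speaks the same wire format. With a netcat built with Unix-socket support (an assumption; any JSON-RPC-capable client works), a raw request looks roughly like:

  printf '{"jsonrpc":"2.0","method":"spdk_get_version","id":1}\n' | nc -U /var/tmp/spdk.sock
  # expected shape of the reply: {"jsonrpc":"2.0","id":1,"result":{"version":"SPDK v25.01-pre ..."}}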
************************************ 00:08:27.576 START TEST rpc_client 00:08:27.576 ************************************ 00:08:27.576 06:10:47 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:27.835 * Looking for test storage... 00:08:27.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.835 06:10:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.835 --rc genhtml_branch_coverage=1 00:08:27.835 --rc genhtml_function_coverage=1 00:08:27.835 --rc genhtml_legend=1 00:08:27.835 --rc geninfo_all_blocks=1 00:08:27.835 --rc geninfo_unexecuted_blocks=1 00:08:27.835 00:08:27.835 ' 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.835 --rc genhtml_branch_coverage=1 00:08:27.835 --rc genhtml_function_coverage=1 00:08:27.835 --rc genhtml_legend=1 00:08:27.835 --rc geninfo_all_blocks=1 00:08:27.835 --rc geninfo_unexecuted_blocks=1 00:08:27.835 00:08:27.835 ' 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:27.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.835 --rc genhtml_branch_coverage=1 00:08:27.835 --rc genhtml_function_coverage=1 00:08:27.835 --rc genhtml_legend=1 00:08:27.835 --rc geninfo_all_blocks=1 00:08:27.835 --rc geninfo_unexecuted_blocks=1 00:08:27.835 00:08:27.835 ' 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.835 --rc genhtml_branch_coverage=1 00:08:27.835 --rc genhtml_function_coverage=1 00:08:27.835 --rc genhtml_legend=1 00:08:27.835 --rc geninfo_all_blocks=1 00:08:27.835 --rc geninfo_unexecuted_blocks=1 00:08:27.835 00:08:27.835 ' 00:08:27.835 06:10:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:27.835 OK 00:08:27.835 06:10:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:27.835 00:08:27.835 real 0m0.153s 00:08:27.835 user 0m0.092s 00:08:27.835 sys 0m0.063s 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.835 06:10:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:27.835 ************************************ 00:08:27.835 END TEST rpc_client 00:08:27.835 ************************************ 00:08:27.835 06:10:47 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:27.835 06:10:47 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.835 06:10:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.835 06:10:47 -- common/autotest_common.sh@10 -- # set +x 00:08:27.835 ************************************ 00:08:27.835 START TEST json_config 00:08:27.835 ************************************ 00:08:27.835 06:10:47 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:27.835 06:10:47 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.835 06:10:47 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.835 06:10:47 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:28.094 06:10:47 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.094 06:10:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.094 06:10:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.094 06:10:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.094 06:10:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.094 06:10:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.094 06:10:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.094 06:10:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.094 06:10:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:28.094 06:10:47 json_config -- scripts/common.sh@345 -- # : 1 00:08:28.094 06:10:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.094 06:10:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.094 06:10:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:28.094 06:10:47 json_config -- scripts/common.sh@353 -- # local d=1 00:08:28.094 06:10:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.094 06:10:47 json_config -- scripts/common.sh@355 -- # echo 1 00:08:28.094 06:10:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.094 06:10:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@353 -- # local d=2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.094 06:10:47 json_config -- scripts/common.sh@355 -- # echo 2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.094 06:10:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.094 06:10:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.094 06:10:47 json_config -- scripts/common.sh@368 -- # return 0 00:08:28.094 06:10:47 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.094 06:10:47 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:28.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.094 --rc genhtml_branch_coverage=1 00:08:28.094 --rc genhtml_function_coverage=1 00:08:28.094 --rc genhtml_legend=1 00:08:28.094 --rc geninfo_all_blocks=1 00:08:28.094 --rc geninfo_unexecuted_blocks=1 00:08:28.094 00:08:28.094 ' 00:08:28.094 06:10:47 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:28.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.094 --rc genhtml_branch_coverage=1 00:08:28.094 --rc genhtml_function_coverage=1 00:08:28.094 --rc genhtml_legend=1 00:08:28.095 --rc geninfo_all_blocks=1 00:08:28.095 --rc geninfo_unexecuted_blocks=1 00:08:28.095 00:08:28.095 ' 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:28.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.095 --rc genhtml_branch_coverage=1 00:08:28.095 --rc genhtml_function_coverage=1 00:08:28.095 --rc genhtml_legend=1 00:08:28.095 --rc geninfo_all_blocks=1 00:08:28.095 --rc geninfo_unexecuted_blocks=1 00:08:28.095 00:08:28.095 ' 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:28.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.095 --rc genhtml_branch_coverage=1 00:08:28.095 --rc genhtml_function_coverage=1 00:08:28.095 --rc genhtml_legend=1 00:08:28.095 --rc geninfo_all_blocks=1 00:08:28.095 --rc geninfo_unexecuted_blocks=1 00:08:28.095 00:08:28.095 ' 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.095 06:10:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.095 06:10:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.095 06:10:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.095 06:10:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.095 06:10:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.095 06:10:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.095 06:10:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.095 06:10:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.095 06:10:47 json_config -- paths/export.sh@5 -- # export PATH 00:08:28.095 06:10:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@51 -- # : 0 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.095 06:10:47 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.095 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.095 06:10:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:28.095 INFO: JSON configuration test init 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:28.095 06:10:47 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:28.095 06:10:47 json_config -- json_config/common.sh@9 -- # local app=target 00:08:28.095 06:10:47 json_config -- json_config/common.sh@10 -- # shift 
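The `[: : integer expression expected` complaint from nvmf/common.sh line 33 above comes from `'[' '' -eq 1 ']'`: test's -eq needs an integer on both sides, and the gating variable is empty here, so the comparison errors out, evaluates false, and the script carries on. A minimal reproduction and the conventional guard (the variable name is illustrative, not from the script):

    # Reproduce: '-eq' requires an integer operand on both sides of 'test'.
    flag=''
    [ "$flag" -eq 1 ] && echo enabled    # -> [: : integer expression expected

    # Guard: give the possibly-unset variable a numeric default first.
    [ "${flag:-0}" -eq 1 ] && echo enabled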
00:08:28.095 06:10:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:28.095 06:10:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:28.095 06:10:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:28.095 06:10:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:28.095 06:10:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:28.095 06:10:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72326 00:08:28.095 Waiting for target to run... 00:08:28.095 06:10:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:28.095 06:10:47 json_config -- json_config/common.sh@25 -- # waitforlisten 72326 /var/tmp/spdk_tgt.sock 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@835 -- # '[' -z 72326 ']' 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:28.095 06:10:47 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:28.096 06:10:47 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:28.096 06:10:47 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.096 06:10:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:28.096 06:10:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:28.096 [2024-11-27 06:10:47.457535] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:28.096 [2024-11-27 06:10:47.457599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72326 ] 00:08:28.354 [2024-11-27 06:10:47.724675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
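waitforlisten above holds the test until spdk_tgt (pid 72326) is actually serving RPCs on /var/tmp/spdk_tgt.sock, with max_retries=100 as shown in the trace. A rough sketch of that wait, assuming rpc.py's spdk_get_version call as the liveness probe (the in-tree helper's exact probe may differ):

    sock=/var/tmp/spdk_tgt.sock
    pid=72326                                    # target pid from the launch above
    for ((i = 0; i < 100; i++)); do              # max_retries=100, per the trace
        if scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
            break                                # target is up and answering RPCs
        fi
        kill -0 "$pid" 2>/dev/null || exit 1     # give up early if the target died
        sleep 0.5
    done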
00:08:28.354 [2024-11-27 06:10:47.749454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.354 [2024-11-27 06:10:47.759692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.919 06:10:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.919 06:10:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:28.919 00:08:28.919 06:10:48 json_config -- json_config/common.sh@26 -- # echo '' 00:08:28.919 06:10:48 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:28.919 06:10:48 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:28.919 06:10:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.919 06:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:28.919 06:10:48 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:28.919 06:10:48 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:28.919 06:10:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.919 06:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:28.920 06:10:48 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:28.920 06:10:48 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:28.920 06:10:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:29.486 06:10:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.486 06:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:29.486 06:10:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@54 -- # sort 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@54 
-- # uniq -u 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:29.486 06:10:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.486 06:10:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:29.486 06:10:48 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:29.486 06:10:49 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:29.486 06:10:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.486 06:10:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:29.486 06:10:49 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:29.486 06:10:49 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:29.486 06:10:49 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:29.486 06:10:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:29.486 06:10:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:29.743 MallocForNvmf0 00:08:29.743 06:10:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:29.743 06:10:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:30.001 MallocForNvmf1 00:08:30.001 06:10:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:30.001 06:10:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:30.259 [2024-11-27 06:10:49.608522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.259 06:10:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.259 06:10:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.517 06:10:49 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:30.517 06:10:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:30.775 06:10:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:30.775 06:10:50 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:30.775 06:10:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:30.775 06:10:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:31.033 [2024-11-27 06:10:50.448823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:31.033 06:10:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:31.033 06:10:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.033 06:10:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:31.033 06:10:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:31.033 06:10:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.033 06:10:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:31.033 06:10:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:31.033 06:10:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:31.033 06:10:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:31.292 MallocBdevForConfigChangeCheck 00:08:31.292 06:10:50 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:31.292 06:10:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.292 06:10:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:31.292 06:10:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:31.292 06:10:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:31.860 INFO: shutting down applications... 00:08:31.860 06:10:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
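Condensed, the subsystem setup traced since the target came up is a short rpc.py sequence: two malloc bdevs become namespaces of one NVMe-oF subsystem listening on TCP 127.0.0.1:4420, plus a canary bdev for the change-detection step, after which the harness snapshots the configuration (roughly; the harness routes save_config output into spdk_tgt_config.json):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json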
00:08:31.860 06:10:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:31.860 06:10:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:31.860 06:10:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:31.861 06:10:51 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:31.861 Calling clear_iscsi_subsystem 00:08:31.861 Calling clear_nvmf_subsystem 00:08:31.861 Calling clear_nbd_subsystem 00:08:31.861 Calling clear_ublk_subsystem 00:08:31.861 Calling clear_vhost_blk_subsystem 00:08:31.861 Calling clear_vhost_scsi_subsystem 00:08:31.861 Calling clear_bdev_subsystem 00:08:32.119 06:10:51 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:32.119 06:10:51 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:32.119 06:10:51 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:32.119 06:10:51 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:32.119 06:10:51 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:32.119 06:10:51 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:32.378 06:10:51 json_config -- json_config/json_config.sh@352 -- # break 00:08:32.378 06:10:51 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:32.378 06:10:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:32.378 06:10:51 json_config -- json_config/common.sh@31 -- # local app=target 00:08:32.378 06:10:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:32.378 06:10:51 json_config -- json_config/common.sh@35 -- # [[ -n 72326 ]] 00:08:32.378 06:10:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72326 00:08:32.378 06:10:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:32.378 06:10:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:32.378 06:10:51 json_config -- json_config/common.sh@41 -- # kill -0 72326 00:08:32.378 06:10:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:32.946 06:10:52 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:32.946 06:10:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:32.946 06:10:52 json_config -- json_config/common.sh@41 -- # kill -0 72326 00:08:32.946 06:10:52 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:32.946 06:10:52 json_config -- json_config/common.sh@43 -- # break 00:08:32.946 06:10:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:32.946 SPDK target shutdown done 00:08:32.946 06:10:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:32.946 INFO: relaunching applications... 00:08:32.946 06:10:52 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
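The shutdown just traced is json_config/common.sh's standard pattern: send SIGINT, then poll the pid for up to thirty half-second intervals before declaring the target gone:

    pid=72326
    kill -SIGINT "$pid"                       # ask spdk_tgt for a clean exit
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests process existence
        sleep 0.5
    done
    echo 'SPDK target shutdown done'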
00:08:32.946 06:10:52 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:32.946 06:10:52 json_config -- json_config/common.sh@9 -- # local app=target 00:08:32.946 06:10:52 json_config -- json_config/common.sh@10 -- # shift 00:08:32.946 06:10:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:32.946 06:10:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:32.946 06:10:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:32.946 06:10:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:32.946 06:10:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:32.946 06:10:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72595 00:08:32.946 Waiting for target to run... 00:08:32.946 06:10:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:32.946 06:10:52 json_config -- json_config/common.sh@25 -- # waitforlisten 72595 /var/tmp/spdk_tgt.sock 00:08:32.946 06:10:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:32.946 06:10:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 72595 ']' 00:08:32.946 06:10:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:32.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:32.946 06:10:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.946 06:10:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:32.946 06:10:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.946 06:10:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:32.946 [2024-11-27 06:10:52.315189] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:32.946 [2024-11-27 06:10:52.315263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72595 ] 00:08:33.207 [2024-11-27 06:10:52.605067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.207 [2024-11-27 06:10:52.632264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.207 [2024-11-27 06:10:52.644126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.468 [2024-11-27 06:10:52.953421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.468 [2024-11-27 06:10:52.985461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:33.729 06:10:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.729 00:08:33.729 06:10:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:33.729 06:10:53 json_config -- json_config/common.sh@26 -- # echo '' 00:08:33.729 INFO: Checking if target configuration is the same... 
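The json_diff.sh run that follows verifies that a target relaunched from spdk_tgt_config.json reproduces that configuration exactly: it pulls a live save_config over the socket, normalizes both sides with config_filter.py -method sort so ordering differences don't count, and diffs the results. In outline (assuming the filter reads stdin; json_diff.sh actually receives the live side as /dev/fd/62):

    live=$(mktemp /tmp/62.XXX)
    file=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > "$file"
    diff -u "$live" "$file" && echo 'INFO: JSON config files are the same'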
00:08:33.729 06:10:53 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:33.729 06:10:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:33.729 06:10:53 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:33.729 06:10:53 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:33.729 06:10:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:33.990 + '[' 2 -ne 2 ']' 00:08:33.990 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:33.990 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:33.990 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:33.990 +++ basename /dev/fd/62 00:08:33.990 ++ mktemp /tmp/62.XXX 00:08:33.990 + tmp_file_1=/tmp/62.x7U 00:08:33.990 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:33.990 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:33.990 + tmp_file_2=/tmp/spdk_tgt_config.json.EgH 00:08:33.990 + ret=0 00:08:33.990 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:34.250 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:34.250 + diff -u /tmp/62.x7U /tmp/spdk_tgt_config.json.EgH 00:08:34.250 + echo 'INFO: JSON config files are the same' 00:08:34.250 INFO: JSON config files are the same 00:08:34.250 + rm /tmp/62.x7U /tmp/spdk_tgt_config.json.EgH 00:08:34.250 + exit 0 00:08:34.250 06:10:53 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:34.250 INFO: changing configuration and checking if this can be detected... 00:08:34.250 06:10:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:34.250 06:10:53 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:34.250 06:10:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:34.511 06:10:53 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:34.511 06:10:53 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:34.511 06:10:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:34.511 + '[' 2 -ne 2 ']' 00:08:34.511 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:34.511 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:34.511 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:34.511 +++ basename /dev/fd/62 00:08:34.511 ++ mktemp /tmp/62.XXX 00:08:34.511 + tmp_file_1=/tmp/62.MUV 00:08:34.511 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:34.511 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:34.511 + tmp_file_2=/tmp/spdk_tgt_config.json.D6B 00:08:34.511 + ret=0 00:08:34.511 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:34.770 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:34.770 + diff -u /tmp/62.MUV /tmp/spdk_tgt_config.json.D6B 00:08:34.770 + ret=1 00:08:34.770 + echo '=== Start of file: /tmp/62.MUV ===' 00:08:34.770 + cat /tmp/62.MUV 00:08:34.770 + echo '=== End of file: /tmp/62.MUV ===' 00:08:34.770 + echo '' 00:08:34.770 + echo '=== Start of file: /tmp/spdk_tgt_config.json.D6B ===' 00:08:34.770 + cat /tmp/spdk_tgt_config.json.D6B 00:08:34.770 + echo '=== End of file: /tmp/spdk_tgt_config.json.D6B ===' 00:08:34.770 + echo '' 00:08:34.770 + rm /tmp/62.MUV /tmp/spdk_tgt_config.json.D6B 00:08:34.770 + exit 1 00:08:34.770 INFO: configuration change detected. 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:34.770 06:10:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.770 06:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@324 -- # [[ -n 72595 ]] 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:34.770 06:10:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.770 06:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:34.770 06:10:54 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:34.770 06:10:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.770 06:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.030 06:10:54 json_config -- json_config/json_config.sh@330 -- # killprocess 72595 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@954 -- # '[' -z 72595 ']' 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@958 -- # kill -0 72595 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@959 -- # uname 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72595 00:08:35.030 
06:10:54 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.030 killing process with pid 72595 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72595' 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@973 -- # kill 72595 00:08:35.030 06:10:54 json_config -- common/autotest_common.sh@978 -- # wait 72595 00:08:35.030 06:10:54 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:35.031 06:10:54 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:35.031 06:10:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.031 06:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.031 06:10:54 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:35.031 INFO: Success 00:08:35.031 06:10:54 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:35.031 00:08:35.031 real 0m7.211s 00:08:35.031 user 0m10.103s 00:08:35.031 sys 0m1.283s 00:08:35.031 ************************************ 00:08:35.031 END TEST json_config 00:08:35.031 ************************************ 00:08:35.031 06:10:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.031 06:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.031 06:10:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:35.031 06:10:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.031 06:10:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.031 06:10:54 -- common/autotest_common.sh@10 -- # set +x 00:08:35.031 ************************************ 00:08:35.031 START TEST json_config_extra_key 00:08:35.031 ************************************ 00:08:35.031 06:10:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.293 06:10:54 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.293 --rc genhtml_branch_coverage=1 00:08:35.293 --rc genhtml_function_coverage=1 00:08:35.293 --rc genhtml_legend=1 00:08:35.293 --rc geninfo_all_blocks=1 00:08:35.293 --rc geninfo_unexecuted_blocks=1 00:08:35.293 00:08:35.293 ' 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.293 --rc genhtml_branch_coverage=1 00:08:35.293 --rc genhtml_function_coverage=1 00:08:35.293 --rc genhtml_legend=1 00:08:35.293 --rc geninfo_all_blocks=1 00:08:35.293 --rc geninfo_unexecuted_blocks=1 00:08:35.293 00:08:35.293 ' 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.293 --rc genhtml_branch_coverage=1 00:08:35.293 --rc genhtml_function_coverage=1 00:08:35.293 --rc genhtml_legend=1 00:08:35.293 --rc geninfo_all_blocks=1 00:08:35.293 --rc geninfo_unexecuted_blocks=1 00:08:35.293 00:08:35.293 ' 00:08:35.293 06:10:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.293 --rc genhtml_branch_coverage=1 00:08:35.293 --rc genhtml_function_coverage=1 00:08:35.293 --rc genhtml_legend=1 00:08:35.293 --rc geninfo_all_blocks=1 00:08:35.293 --rc geninfo_unexecuted_blocks=1 00:08:35.293 00:08:35.293 ' 00:08:35.293 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.293 06:10:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.293 06:10:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.293 06:10:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.293 06:10:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.294 06:10:54 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.294 06:10:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:35.294 06:10:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.294 06:10:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:35.294 INFO: launching applications... 00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
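Where the json_config suite above brought up a bare target with --wait-for-rpc and configured it over the socket, this suite hands spdk_tgt a canned configuration at boot; the launch traced below reduces to the following, with no configuration RPCs needed afterwards:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
    # Subsystems are constructed directly from the JSON file at startup.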
00:08:35.294 06:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=72773 00:08:35.294 Waiting for target to run... 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 72773 /var/tmp/spdk_tgt.sock 00:08:35.294 06:10:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 72773 ']' 00:08:35.294 06:10:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:35.294 06:10:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.294 06:10:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:35.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:35.294 06:10:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.294 06:10:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:35.294 06:10:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:35.294 [2024-11-27 06:10:54.710133] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:35.294 [2024-11-27 06:10:54.710207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72773 ] 00:08:35.555 [2024-11-27 06:10:54.980618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:35.555 [2024-11-27 06:10:55.005801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.555 [2024-11-27 06:10:55.016314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.127 06:10:55 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.128 06:10:55 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:36.128 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:36.128 INFO: shutting down applications... 00:08:36.128 06:10:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:08:36.128 06:10:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 72773 ]] 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 72773 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72773 00:08:36.128 06:10:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:36.700 06:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:36.700 06:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:36.700 06:10:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72773 00:08:36.700 06:10:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:36.700 06:10:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:36.700 06:10:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:36.700 SPDK target shutdown done 00:08:36.700 Success 00:08:36.700 06:10:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:36.700 06:10:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:36.700 00:08:36.700 real 0m1.566s 00:08:36.700 user 0m1.282s 00:08:36.700 sys 0m0.284s 00:08:36.700 06:10:56 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.700 06:10:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:36.700 ************************************ 00:08:36.700 END TEST json_config_extra_key 00:08:36.700 ************************************ 00:08:36.700 06:10:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:36.700 06:10:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.700 06:10:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.700 06:10:56 -- common/autotest_common.sh@10 -- # set +x 00:08:36.700 ************************************ 00:08:36.700 START TEST alias_rpc 00:08:36.700 ************************************ 00:08:36.700 06:10:56 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:36.700 * Looking for test storage... 
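alias_rpc now repeats the lcov version gate that has prefaced every suite here: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them component by component. Distilled into a standalone sketch, assuming purely numeric components and defaulting missing ones to 0 (the in-tree helper normalizes components through its decimal function instead):

    lt() {                                     # "is $1 < $2", e.g. lt 1.15 2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) v
        for ((v = 0; v < n; v++)); do
            (( ${v1[v]:-0} < ${v2[v]:-0} )) && return 0
            (( ${v1[v]:-0} > ${v2[v]:-0} )) && return 1
        done
        return 1                               # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov 1.x: use the lcov_*_coverage rc option spellings'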
00:08:36.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:36.700 06:10:56 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.700 06:10:56 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.700 06:10:56 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.962 06:10:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.962 --rc genhtml_branch_coverage=1 00:08:36.962 --rc genhtml_function_coverage=1 00:08:36.962 --rc genhtml_legend=1 00:08:36.962 --rc geninfo_all_blocks=1 00:08:36.962 --rc geninfo_unexecuted_blocks=1 00:08:36.962 00:08:36.962 ' 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.962 --rc genhtml_branch_coverage=1 00:08:36.962 --rc genhtml_function_coverage=1 00:08:36.962 --rc genhtml_legend=1 00:08:36.962 --rc geninfo_all_blocks=1 00:08:36.962 --rc geninfo_unexecuted_blocks=1 00:08:36.962 00:08:36.962 ' 00:08:36.962 06:10:56 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.962 --rc genhtml_branch_coverage=1 00:08:36.962 --rc genhtml_function_coverage=1 00:08:36.962 --rc genhtml_legend=1 00:08:36.962 --rc geninfo_all_blocks=1 00:08:36.962 --rc geninfo_unexecuted_blocks=1 00:08:36.962 00:08:36.962 ' 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.962 --rc genhtml_branch_coverage=1 00:08:36.962 --rc genhtml_function_coverage=1 00:08:36.962 --rc genhtml_legend=1 00:08:36.962 --rc geninfo_all_blocks=1 00:08:36.962 --rc geninfo_unexecuted_blocks=1 00:08:36.962 00:08:36.962 ' 00:08:36.962 06:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:36.962 06:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=72858 00:08:36.962 06:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 72858 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 72858 ']' 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.962 06:10:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.962 06:10:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:36.962 [2024-11-27 06:10:56.311952] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:36.962 [2024-11-27 06:10:56.312023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72858 ] 00:08:36.962 [2024-11-27 06:10:56.425875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
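The shutdown sequence traced a few records above (json_config/common.sh) is a bounded graceful stop: send SIGINT, then poll with kill -0 for up to 30 half-second intervals. As a sketch:

# Illustrative: graceful shutdown with a bounded wait, matching the traced loop.
pid=72773                                # app_pid from the trace above
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2> /dev/null || break # kill -0 only tests that the PID still exists
    sleep 0.5
done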
00:08:36.962 [2024-11-27 06:10:56.450173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.962 [2024-11-27 06:10:56.468829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.904 06:10:57 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.904 06:10:57 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:37.904 06:10:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:37.904 06:10:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 72858 00:08:37.904 06:10:57 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 72858 ']' 00:08:37.904 06:10:57 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 72858 00:08:37.904 06:10:57 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:37.904 06:10:57 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.904 06:10:57 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72858 00:08:38.164 06:10:57 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.164 killing process with pid 72858 00:08:38.164 06:10:57 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.164 06:10:57 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72858' 00:08:38.164 06:10:57 alias_rpc -- common/autotest_common.sh@973 -- # kill 72858 00:08:38.164 06:10:57 alias_rpc -- common/autotest_common.sh@978 -- # wait 72858 00:08:38.164 00:08:38.164 real 0m1.500s 00:08:38.164 user 0m1.752s 00:08:38.164 sys 0m0.295s 00:08:38.164 06:10:57 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.164 06:10:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.164 ************************************ 00:08:38.164 END TEST alias_rpc 00:08:38.164 ************************************ 00:08:38.164 06:10:57 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:08:38.164 06:10:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:38.164 06:10:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.164 06:10:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.164 06:10:57 -- common/autotest_common.sh@10 -- # set +x 00:08:38.164 ************************************ 00:08:38.164 START TEST dpdk_mem_utility 00:08:38.164 ************************************ 00:08:38.164 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:38.424 * Looking for test storage... 
00:08:38.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.425 06:10:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.425 --rc genhtml_branch_coverage=1 00:08:38.425 --rc genhtml_function_coverage=1 00:08:38.425 --rc genhtml_legend=1 00:08:38.425 --rc geninfo_all_blocks=1 00:08:38.425 --rc geninfo_unexecuted_blocks=1 00:08:38.425 00:08:38.425 ' 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.425 --rc 
genhtml_branch_coverage=1 00:08:38.425 --rc genhtml_function_coverage=1 00:08:38.425 --rc genhtml_legend=1 00:08:38.425 --rc geninfo_all_blocks=1 00:08:38.425 --rc geninfo_unexecuted_blocks=1 00:08:38.425 00:08:38.425 ' 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.425 --rc genhtml_branch_coverage=1 00:08:38.425 --rc genhtml_function_coverage=1 00:08:38.425 --rc genhtml_legend=1 00:08:38.425 --rc geninfo_all_blocks=1 00:08:38.425 --rc geninfo_unexecuted_blocks=1 00:08:38.425 00:08:38.425 ' 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.425 --rc genhtml_branch_coverage=1 00:08:38.425 --rc genhtml_function_coverage=1 00:08:38.425 --rc genhtml_legend=1 00:08:38.425 --rc geninfo_all_blocks=1 00:08:38.425 --rc geninfo_unexecuted_blocks=1 00:08:38.425 00:08:38.425 ' 00:08:38.425 06:10:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:38.425 06:10:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72952 00:08:38.425 06:10:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72952 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 72952 ']' 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.425 06:10:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:38.425 06:10:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:38.425 [2024-11-27 06:10:57.870492] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:38.425 [2024-11-27 06:10:57.870558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72952 ] 00:08:38.686 [2024-11-27 06:10:57.982002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
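Once this target is up, the suite requests a DPDK memory snapshot over RPC and renders the heap/mempool report seen below with scripts/dpdk_mem_info.py. The two-step pattern, roughly:

# Illustrative: dump and summarize DPDK memory for a running SPDK target.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" env_dpdk_get_mem_stats                               # writes /tmp/spdk_mem_dump.txt
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py       # heap/mempool/memzone summary
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0  # per-element detail for heap 0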
00:08:38.686 [2024-11-27 06:10:58.010273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.686 [2024-11-27 06:10:58.029248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.686 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.686 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:38.686 06:10:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:38.686 06:10:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:38.686 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.686 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:38.686 { 00:08:38.686 "filename": "/tmp/spdk_mem_dump.txt" 00:08:38.686 } 00:08:38.686 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.686 06:10:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:38.948 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:38.948 1 heaps totaling size 818.000000 MiB 00:08:38.948 size: 818.000000 MiB heap id: 0 00:08:38.948 end heaps---------- 00:08:38.948 9 mempools totaling size 603.782043 MiB 00:08:38.948 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:38.948 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:38.948 size: 100.555481 MiB name: bdev_io_72952 00:08:38.948 size: 50.003479 MiB name: msgpool_72952 00:08:38.948 size: 36.509338 MiB name: fsdev_io_72952 00:08:38.948 size: 21.763794 MiB name: PDU_Pool 00:08:38.948 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:38.948 size: 4.133484 MiB name: evtpool_72952 00:08:38.948 size: 0.026123 MiB name: Session_Pool 00:08:38.948 end mempools------- 00:08:38.948 6 memzones totaling size 4.142822 MiB 00:08:38.948 size: 1.000366 MiB name: RG_ring_0_72952 00:08:38.948 size: 1.000366 MiB name: RG_ring_1_72952 00:08:38.948 size: 1.000366 MiB name: RG_ring_4_72952 00:08:38.948 size: 1.000366 MiB name: RG_ring_5_72952 00:08:38.948 size: 0.125366 MiB name: RG_ring_2_72952 00:08:38.948 size: 0.015991 MiB name: RG_ring_3_72952 00:08:38.948 end memzones------- 00:08:38.948 06:10:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:38.948 heap id: 0 total size: 818.000000 MiB number of busy elements: 232 number of free elements: 15 00:08:38.948 list of free elements. 
size: 10.958618 MiB 00:08:38.948 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:38.948 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:38.948 element at address: 0x200000400000 with size: 0.996338 MiB 00:08:38.948 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:38.948 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:38.948 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:38.948 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:38.948 element at address: 0x200000200000 with size: 0.858093 MiB 00:08:38.948 element at address: 0x20001ae00000 with size: 0.571533 MiB 00:08:38.948 element at address: 0x200000c00000 with size: 0.490845 MiB 00:08:38.948 element at address: 0x20000a600000 with size: 0.489441 MiB 00:08:38.948 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:38.948 element at address: 0x200003e00000 with size: 0.481018 MiB 00:08:38.948 element at address: 0x200028200000 with size: 0.397400 MiB 00:08:38.948 element at address: 0x200000800000 with size: 0.353394 MiB 00:08:38.948 list of standard malloc elements. size: 199.112488 MiB 00:08:38.948 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:38.948 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:38.948 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:38.948 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:38.948 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:38.948 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:38.948 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:38.948 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:38.948 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000085a780 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000085a980 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f080 with size: 0.000183 MiB 
00:08:38.948 element at address: 0x20000087f140 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f200 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f380 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f440 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f500 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:38.948 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:08:38.948 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:38.949 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:08:38.949 element at 
address: 0x20000a67d640 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae940c0 
with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200028265bc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x200028265c80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826c880 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d080 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d140 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d200 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d380 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d440 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d500 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d680 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d740 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d800 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826d8c0 with size: 0.000183 MiB 
00:08:38.949 element at address: 0x20002826d980 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826da40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826db00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826de00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826df80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e040 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e100 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e280 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e340 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e400 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e580 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e640 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e700 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e880 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826e940 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826f000 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826f180 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826f240 with size: 0.000183 MiB 00:08:38.949 element at address: 0x20002826f300 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f480 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f540 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f600 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f780 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f840 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f900 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:08:38.950 element at 
address: 0x20002826fe40 with size: 0.000183 MiB 00:08:38.950 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:38.950 list of memzone associated elements. size: 607.928894 MiB 00:08:38.950 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:38.950 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:38.950 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:38.950 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:38.950 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:38.950 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_72952_0 00:08:38.950 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:38.950 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72952_0 00:08:38.950 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:38.950 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_72952_0 00:08:38.950 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:38.950 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:38.950 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:38.950 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:38.950 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:38.950 associated memzone info: size: 3.000122 MiB name: MP_evtpool_72952_0 00:08:38.950 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:38.950 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72952 00:08:38.950 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:08:38.950 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72952 00:08:38.950 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:38.950 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:38.950 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:38.950 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:38.950 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:38.950 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:38.950 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:38.950 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:38.950 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:38.950 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72952 00:08:38.950 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:38.950 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72952 00:08:38.950 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:38.950 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72952 00:08:38.950 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:08:38.950 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72952 00:08:38.950 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:38.950 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_72952 00:08:38.950 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:38.950 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72952 00:08:38.950 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:38.950 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:38.950 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:38.950 associated memzone info: 
size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:38.950 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:38.950 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:38.950 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:08:38.950 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_72952 00:08:38.950 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:08:38.950 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72952 00:08:38.950 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:38.950 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:38.950 element at address: 0x200028265d40 with size: 0.023743 MiB 00:08:38.950 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:38.950 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:08:38.950 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72952 00:08:38.950 element at address: 0x20002826be80 with size: 0.002441 MiB 00:08:38.950 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:38.950 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:38.950 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72952 00:08:38.950 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:38.950 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_72952 00:08:38.950 element at address: 0x20000085a840 with size: 0.000305 MiB 00:08:38.950 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72952 00:08:38.950 element at address: 0x20002826c940 with size: 0.000305 MiB 00:08:38.950 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:38.950 06:10:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:38.950 06:10:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72952 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 72952 ']' 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 72952 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72952 00:08:38.950 killing process with pid 72952 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72952' 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 72952 00:08:38.950 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 72952 00:08:39.212 00:08:39.212 real 0m0.846s 00:08:39.212 user 0m0.832s 00:08:39.212 sys 0m0.275s 00:08:39.212 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.212 06:10:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:39.212 ************************************ 00:08:39.212 END TEST dpdk_mem_utility 00:08:39.212 ************************************ 00:08:39.212 06:10:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:39.212 06:10:58 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.212 06:10:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.212 06:10:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.212 ************************************ 00:08:39.212 START TEST event 00:08:39.212 ************************************ 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:39.212 * Looking for test storage... 00:08:39.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.212 06:10:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.212 06:10:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.212 06:10:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.212 06:10:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.212 06:10:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.212 06:10:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.212 06:10:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.212 06:10:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.212 06:10:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.212 06:10:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.212 06:10:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.212 06:10:58 event -- scripts/common.sh@344 -- # case "$op" in 00:08:39.212 06:10:58 event -- scripts/common.sh@345 -- # : 1 00:08:39.212 06:10:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.212 06:10:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.212 06:10:58 event -- scripts/common.sh@365 -- # decimal 1 00:08:39.212 06:10:58 event -- scripts/common.sh@353 -- # local d=1 00:08:39.212 06:10:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.212 06:10:58 event -- scripts/common.sh@355 -- # echo 1 00:08:39.212 06:10:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.212 06:10:58 event -- scripts/common.sh@366 -- # decimal 2 00:08:39.212 06:10:58 event -- scripts/common.sh@353 -- # local d=2 00:08:39.212 06:10:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.212 06:10:58 event -- scripts/common.sh@355 -- # echo 2 00:08:39.212 06:10:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.212 06:10:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.212 06:10:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.212 06:10:58 event -- scripts/common.sh@368 -- # return 0 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.212 --rc genhtml_branch_coverage=1 00:08:39.212 --rc genhtml_function_coverage=1 00:08:39.212 --rc genhtml_legend=1 00:08:39.212 --rc geninfo_all_blocks=1 00:08:39.212 --rc geninfo_unexecuted_blocks=1 00:08:39.212 00:08:39.212 ' 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.212 --rc genhtml_branch_coverage=1 00:08:39.212 --rc genhtml_function_coverage=1 00:08:39.212 --rc genhtml_legend=1 00:08:39.212 --rc geninfo_all_blocks=1 00:08:39.212 --rc geninfo_unexecuted_blocks=1 00:08:39.212 00:08:39.212 ' 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.212 --rc genhtml_branch_coverage=1 00:08:39.212 --rc genhtml_function_coverage=1 00:08:39.212 --rc genhtml_legend=1 00:08:39.212 --rc geninfo_all_blocks=1 00:08:39.212 --rc geninfo_unexecuted_blocks=1 00:08:39.212 00:08:39.212 ' 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.212 --rc genhtml_branch_coverage=1 00:08:39.212 --rc genhtml_function_coverage=1 00:08:39.212 --rc genhtml_legend=1 00:08:39.212 --rc geninfo_all_blocks=1 00:08:39.212 --rc geninfo_unexecuted_blocks=1 00:08:39.212 00:08:39.212 ' 00:08:39.212 06:10:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:39.212 06:10:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:39.212 06:10:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:39.212 06:10:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.212 06:10:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:39.212 ************************************ 00:08:39.212 START TEST event_perf 00:08:39.212 ************************************ 00:08:39.212 06:10:58 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:39.212 Running I/O for 1 seconds...[2024-11-27 
06:10:58.728238] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:39.212 [2024-11-27 06:10:58.728322] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73036 ] 00:08:39.473 [2024-11-27 06:10:58.838660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.473 [2024-11-27 06:10:58.866922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.473 [2024-11-27 06:10:58.894896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.473 [2024-11-27 06:10:58.894984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.473 Running I/O for 1 seconds...[2024-11-27 06:10:58.895217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.473 [2024-11-27 06:10:58.895232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.406 00:08:40.406 lcore 0: 170132 00:08:40.406 lcore 1: 170132 00:08:40.406 lcore 2: 170130 00:08:40.407 lcore 3: 170130 00:08:40.407 done. 00:08:40.407 00:08:40.407 real 0m1.208s 00:08:40.407 user 0m4.057s 00:08:40.407 sys 0m0.029s 00:08:40.407 06:10:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.407 06:10:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:40.407 ************************************ 00:08:40.407 END TEST event_perf 00:08:40.407 ************************************ 00:08:40.665 06:10:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:40.665 06:10:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:40.665 06:10:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.665 06:10:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:40.665 ************************************ 00:08:40.665 START TEST event_reactor 00:08:40.665 ************************************ 00:08:40.665 06:10:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:40.665 [2024-11-27 06:10:59.974896] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:40.665 [2024-11-27 06:10:59.974988] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73069 ] 00:08:40.665 [2024-11-27 06:11:00.092464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
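The long scripts/common.sh trace that opens each suite is just a version gate: it decides whether the installed lcov predates 2.x before picking coverage flags. Condensed into a standalone sketch (the real helper also splits on '-' and ':'; this keeps only the dotted-field comparison):

# Sketch of the cmp_versions/lt logic traced from scripts/common.sh.
lt() {                       # returns 0 (true) if version $1 sorts before $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                 # equal is not less-than
}
lt "1.15" "2" && echo "lcov < 2: use legacy --rc coverage flags"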
00:08:40.665 [2024-11-27 06:11:00.121378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.665 [2024-11-27 06:11:00.140607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.660 test_start 00:08:41.660 oneshot 00:08:41.660 tick 100 00:08:41.660 tick 100 00:08:41.660 tick 250 00:08:41.660 tick 100 00:08:41.660 tick 100 00:08:41.660 tick 100 00:08:41.660 tick 250 00:08:41.660 tick 500 00:08:41.660 tick 100 00:08:41.660 tick 100 00:08:41.660 tick 250 00:08:41.660 tick 100 00:08:41.660 tick 100 00:08:41.660 test_end 00:08:41.660 00:08:41.660 real 0m1.209s 00:08:41.660 user 0m1.070s 00:08:41.660 sys 0m0.033s 00:08:41.660 06:11:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.660 ************************************ 00:08:41.660 END TEST event_reactor 00:08:41.660 ************************************ 00:08:41.660 06:11:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:41.918 06:11:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:41.918 06:11:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.918 06:11:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.918 06:11:01 event -- common/autotest_common.sh@10 -- # set +x 00:08:41.918 ************************************ 00:08:41.918 START TEST event_reactor_perf 00:08:41.918 ************************************ 00:08:41.918 06:11:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:41.918 [2024-11-27 06:11:01.215010] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:41.918 [2024-11-27 06:11:01.215069] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73105 ] 00:08:41.918 [2024-11-27 06:11:01.326565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
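Every suite in this log is wrapped by run_test, which prints the START TEST/END TEST banners and the real/user/sys timing lines. Schematically (a sketch of the wrapper's shape, not the verbatim autotest_common.sh function):

# Sketch of the banner/timing wrapper visible throughout this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                # the suite body; timing appears as real/user/sys
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}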
00:08:41.918 [2024-11-27 06:11:01.356769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.918 [2024-11-27 06:11:01.375491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.289 test_start 00:08:43.289 test_end 00:08:43.289 Performance: 388065 events per second 00:08:43.289 00:08:43.289 real 0m1.200s 00:08:43.289 user 0m1.064s 00:08:43.289 sys 0m0.030s 00:08:43.289 06:11:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.289 06:11:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:43.289 ************************************ 00:08:43.289 END TEST event_reactor_perf 00:08:43.289 ************************************ 00:08:43.289 06:11:02 event -- event/event.sh@49 -- # uname -s 00:08:43.289 06:11:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:43.289 06:11:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:43.289 06:11:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.289 06:11:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.289 06:11:02 event -- common/autotest_common.sh@10 -- # set +x 00:08:43.289 ************************************ 00:08:43.289 START TEST event_scheduler 00:08:43.289 ************************************ 00:08:43.289 06:11:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:43.289 * Looking for test storage... 00:08:43.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:43.289 06:11:02 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.289 06:11:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.289 06:11:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:43.289 06:11:02 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:43.289 06:11:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.289 06:11:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.289 06:11:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.290 06:11:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.290 --rc genhtml_branch_coverage=1 00:08:43.290 --rc genhtml_function_coverage=1 00:08:43.290 --rc genhtml_legend=1 00:08:43.290 --rc geninfo_all_blocks=1 00:08:43.290 --rc geninfo_unexecuted_blocks=1 00:08:43.290 00:08:43.290 ' 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.290 --rc genhtml_branch_coverage=1 00:08:43.290 --rc genhtml_function_coverage=1 00:08:43.290 --rc genhtml_legend=1 00:08:43.290 --rc geninfo_all_blocks=1 00:08:43.290 --rc geninfo_unexecuted_blocks=1 00:08:43.290 00:08:43.290 ' 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.290 --rc genhtml_branch_coverage=1 00:08:43.290 --rc genhtml_function_coverage=1 00:08:43.290 --rc genhtml_legend=1 00:08:43.290 --rc geninfo_all_blocks=1 00:08:43.290 --rc geninfo_unexecuted_blocks=1 00:08:43.290 00:08:43.290 ' 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:43.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.290 --rc genhtml_branch_coverage=1 00:08:43.290 --rc genhtml_function_coverage=1 00:08:43.290 --rc genhtml_legend=1 00:08:43.290 --rc geninfo_all_blocks=1 00:08:43.290 --rc geninfo_unexecuted_blocks=1 00:08:43.290 00:08:43.290 ' 00:08:43.290 06:11:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:43.290 06:11:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73169 00:08:43.290 06:11:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:43.290 06:11:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73169 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 73169 ']' 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.290 06:11:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:43.290 06:11:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:43.290 [2024-11-27 06:11:02.623185] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:08:43.290 [2024-11-27 06:11:02.623250] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73169 ] 00:08:43.290 [2024-11-27 06:11:02.737328] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:43.290 [2024-11-27 06:11:02.763074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.290 [2024-11-27 06:11:02.785282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.290 [2024-11-27 06:11:02.785337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.290 [2024-11-27 06:11:02.785447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.290 [2024-11-27 06:11:02.785860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:44.224 06:11:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:44.224 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:44.224 POWER: intel_pstate driver is not supported 00:08:44.224 POWER: cppc_cpufreq driver is not supported 00:08:44.224 POWER: amd-pstate driver is not supported 00:08:44.224 POWER: acpi-cpufreq driver is not supported 00:08:44.224 POWER: Unable to set Power Management Environment for lcore 0 00:08:44.224 [2024-11-27 06:11:03.506680] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:44.224 [2024-11-27 06:11:03.506874] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:44.224 [2024-11-27 06:11:03.506930] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:44.224 [2024-11-27 06:11:03.506974] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:44.224 [2024-11-27 06:11:03.507015] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 
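The startup just traced follows the usual SPDK test pattern: launch the app with --wait-for-rpc so the framework pauses before initialization, then poll the RPC socket until the app answers. A minimal bash sketch of that flow, reusing the flags visible in the trace; the retry budget and the rpc_get_methods probe are illustrative stand-ins for the waitforlisten helper, not lifted from the log:

    scheduler_bin=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    # Four cores (-m 0xF), main core 2 (-p 0x2), framework held until RPC.
    "$scheduler_bin" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    trap 'kill "$scheduler_pid"; exit 1' SIGINT SIGTERM EXIT

    # Poll the UNIX socket until the app is listening, then proceed.
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done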
00:08:44.224 [2024-11-27 06:11:03.507049] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 [2024-11-27 06:11:03.560185] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 ************************************ 00:08:44.224 START TEST scheduler_create_thread 00:08:44.224 ************************************ 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 2 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 3 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 4 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 5 00:08:44.224 06:11:03 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 6 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 7 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 8 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 9 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.224 10 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.224 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:44.225 06:11:03 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.225 06:11:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.599 ************************************ 00:08:45.599 END TEST scheduler_create_thread 00:08:45.599 ************************************ 00:08:45.599 06:11:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.599 00:08:45.599 real 0m1.171s 00:08:45.599 user 0m0.014s 00:08:45.599 sys 0m0.004s 00:08:45.599 06:11:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.599 06:11:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:45.599 06:11:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:45.599 06:11:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73169 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 73169 ']' 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 73169 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73169 00:08:45.599 killing process with pid 73169 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73169' 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 73169 00:08:45.599 06:11:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 73169 00:08:45.857 [2024-11-27 06:11:05.217793] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
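The scheduler_create_thread block above drives the whole test through plugin RPCs: four busy threads pinned one per core, four idle pinned threads, two unpinned ones, then a retune and a delete. A condensed sketch of that flow using the same rpc_cmd helper and scheduler_plugin methods seen in the trace; the masks and load values simply echo the traced calls:

    # Busy threads (-a 100), one pinned to each of the four cores.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
            -n active_pinned -m "$mask" -a 100
    done
    # An initially idle unpinned thread whose activity is raised afterwards.
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
        -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # A short-lived thread created only to exercise deletion.
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
        -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"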
00:08:45.857 ************************************ 00:08:45.857 END TEST event_scheduler 00:08:45.857 ************************************ 00:08:45.857 00:08:45.857 real 0m2.866s 00:08:45.857 user 0m5.217s 00:08:45.857 sys 0m0.280s 00:08:45.857 06:11:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.857 06:11:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:45.857 06:11:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:45.857 06:11:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:45.857 06:11:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.857 06:11:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.857 06:11:05 event -- common/autotest_common.sh@10 -- # set +x 00:08:45.857 ************************************ 00:08:45.857 START TEST app_repeat 00:08:45.857 ************************************ 00:08:45.857 06:11:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:45.857 06:11:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.857 06:11:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.857 06:11:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:45.857 06:11:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73270 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.858 Process app_repeat pid: 73270 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73270' 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:45.858 spdk_app_start Round 0 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73270 /var/tmp/spdk-nbd.sock 00:08:45.858 06:11:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:45.858 06:11:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73270 ']' 00:08:45.858 06:11:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:45.858 06:11:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:45.858 06:11:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:45.858 06:11:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.858 06:11:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:45.858 [2024-11-27 06:11:05.373701] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:08:45.858 [2024-11-27 06:11:05.373765] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73270 ] 00:08:46.115 [2024-11-27 06:11:05.485807] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:46.115 [2024-11-27 06:11:05.513771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:46.116 [2024-11-27 06:11:05.534930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.116 [2024-11-27 06:11:05.534945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.116 06:11:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.116 06:11:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:46.116 06:11:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:46.445 Malloc0 00:08:46.445 06:11:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:46.704 Malloc1 00:08:46.704 06:11:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:46.704 06:11:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:46.962 /dev/nbd0 00:08:46.962 06:11:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:46.962 06:11:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:46.962 06:11:06 event.app_repeat -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:46.962 1+0 records in 00:08:46.962 1+0 records out 00:08:46.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236833 s, 17.3 MB/s 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:46.962 06:11:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:46.962 06:11:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.962 06:11:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:46.963 06:11:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:47.221 /dev/nbd1 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.221 1+0 records in 00:08:47.221 1+0 records out 00:08:47.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275476 s, 14.9 MB/s 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:47.221 06:11:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.221 06:11:06 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:47.221 { 00:08:47.221 "bdev_name": "Malloc0", 00:08:47.221 "nbd_device": "/dev/nbd0" 00:08:47.221 }, 00:08:47.221 { 00:08:47.221 "bdev_name": "Malloc1", 00:08:47.221 "nbd_device": "/dev/nbd1" 00:08:47.221 } 00:08:47.221 ]' 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.221 06:11:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:47.221 { 00:08:47.221 "bdev_name": "Malloc0", 00:08:47.221 "nbd_device": "/dev/nbd0" 00:08:47.221 }, 00:08:47.221 { 00:08:47.221 "bdev_name": "Malloc1", 00:08:47.221 "nbd_device": "/dev/nbd1" 00:08:47.221 } 00:08:47.221 ]' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:47.479 /dev/nbd1' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:47.479 /dev/nbd1' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:47.479 256+0 records in 00:08:47.479 256+0 records out 00:08:47.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113742 s, 92.2 MB/s 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:47.479 256+0 records in 00:08:47.479 256+0 records out 00:08:47.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169029 s, 62.0 MB/s 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:47.479 256+0 records in 00:08:47.479 256+0 records out 00:08:47.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180899 s, 58.0 MB/s 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:47.479 
06:11:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.479 06:11:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.737 06:11:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:47.995 06:11:07 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:47.995 06:11:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:48.254 06:11:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:48.254 06:11:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:48.512 06:11:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:48.512 [2024-11-27 06:11:07.855350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:48.512 [2024-11-27 06:11:07.875207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.512 [2024-11-27 06:11:07.875213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.512 [2024-11-27 06:11:07.907222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:48.512 [2024-11-27 06:11:07.907278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:51.807 06:11:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:51.807 spdk_app_start Round 1 00:08:51.807 06:11:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:51.807 06:11:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73270 /var/tmp/spdk-nbd.sock 00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73270 ']' 00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
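Round 1 now repeats the cycle just completed in Round 0: create Malloc0 and Malloc1 (64 MB, 4 KiB blocks), export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data to each, re-read it with cmp, then detach. A condensed sketch of that write/verify cycle built from the RPCs in the trace; the rpc wrapper and the temp-file path are illustrative, as the test keeps its scratch file under the repo:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create 64 4096        # first call returns Malloc0
    rpc bdev_malloc_create 64 4096        # second call returns Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1

    tmp=/tmp/nbdrandtest                  # illustrative scratch path
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"        # nonzero exit on any byte mismatch
    done
    rm "$tmp"
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1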
00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.807 06:11:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:51.807 06:11:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:51.807 Malloc0 00:08:51.807 06:11:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.066 Malloc1 00:08:52.066 06:11:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.067 06:11:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:52.328 /dev/nbd0 00:08:52.328 06:11:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:52.328 06:11:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:52.328 06:11:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:52.328 06:11:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:52.328 06:11:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:52.329 1+0 records in 00:08:52.329 1+0 records out 
00:08:52.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246548 s, 16.6 MB/s 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:52.329 06:11:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:52.329 06:11:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.329 06:11:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.329 06:11:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:52.590 /dev/nbd1 00:08:52.590 06:11:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:52.590 06:11:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:52.590 1+0 records in 00:08:52.590 1+0 records out 00:08:52.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198216 s, 20.7 MB/s 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:52.590 06:11:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:52.590 06:11:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.590 06:11:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.590 06:11:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.590 06:11:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.590 06:11:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:52.854 { 00:08:52.854 "bdev_name": "Malloc0", 00:08:52.854 "nbd_device": "/dev/nbd0" 00:08:52.854 }, 00:08:52.854 { 00:08:52.854 "bdev_name": "Malloc1", 00:08:52.854 "nbd_device": "/dev/nbd1" 00:08:52.854 } 
00:08:52.854 ]' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:52.854 { 00:08:52.854 "bdev_name": "Malloc0", 00:08:52.854 "nbd_device": "/dev/nbd0" 00:08:52.854 }, 00:08:52.854 { 00:08:52.854 "bdev_name": "Malloc1", 00:08:52.854 "nbd_device": "/dev/nbd1" 00:08:52.854 } 00:08:52.854 ]' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:52.854 /dev/nbd1' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:52.854 /dev/nbd1' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:52.854 256+0 records in 00:08:52.854 256+0 records out 00:08:52.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419577 s, 250 MB/s 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:52.854 256+0 records in 00:08:52.854 256+0 records out 00:08:52.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166682 s, 62.9 MB/s 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:52.854 256+0 records in 00:08:52.854 256+0 records out 00:08:52.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166867 s, 62.8 MB/s 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:52.854 06:11:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:52.854 06:11:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.855 06:11:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:52.855 06:11:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.855 06:11:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.855 06:11:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.855 06:11:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:52.855 06:11:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.855 06:11:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.115 06:11:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:53.386 06:11:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.387 06:11:12 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:53.387 06:11:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:53.387 06:11:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:53.647 06:11:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:53.905 [2024-11-27 06:11:13.200167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.905 [2024-11-27 06:11:13.218960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.905 [2024-11-27 06:11:13.218961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.905 [2024-11-27 06:11:13.249854] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:53.905 [2024-11-27 06:11:13.249895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:57.227 spdk_app_start Round 2 00:08:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:57.227 06:11:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:57.227 06:11:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:57.227 06:11:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73270 /var/tmp/spdk-nbd.sock 00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73270 ']' 00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
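Before each restart the harness also proves the previous instance detached cleanly: nbd_get_disks must come back empty, and only then is the app told to exit over RPC. A sketch of that teardown check under the same assumptions as above (rpc wrapper illustrative, jq on PATH as in the trace):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    # grep -c prints 0 when nothing matches, so count is always numeric.
    count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    if (( count != 0 )); then
        echo "still have $count nbd device(s) attached" >&2
        exit 1
    fi
    rpc spdk_kill_instance SIGTERM
    sleep 3    # matches the trace's pause before the next round starts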
00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.227 06:11:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:57.227 06:11:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.227 Malloc0 00:08:57.227 06:11:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.485 Malloc1 00:08:57.485 06:11:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.485 06:11:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:57.744 /dev/nbd0 00:08:57.744 06:11:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.744 06:11:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.744 1+0 records in 00:08:57.744 1+0 records out 
00:08:57.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022315 s, 18.4 MB/s 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.744 06:11:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:57.744 06:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.744 06:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.744 06:11:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:58.002 /dev/nbd1 00:08:58.002 06:11:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.002 06:11:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.002 1+0 records in 00:08:58.002 1+0 records out 00:08:58.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320638 s, 12.8 MB/s 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:58.002 06:11:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:58.002 06:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.002 06:11:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.002 06:11:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.002 06:11:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.002 06:11:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:58.261 { 00:08:58.261 "bdev_name": "Malloc0", 00:08:58.261 "nbd_device": "/dev/nbd0" 00:08:58.261 }, 00:08:58.261 { 00:08:58.261 "bdev_name": "Malloc1", 00:08:58.261 "nbd_device": "/dev/nbd1" 00:08:58.261 } 
00:08:58.261 ]' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:58.261 { 00:08:58.261 "bdev_name": "Malloc0", 00:08:58.261 "nbd_device": "/dev/nbd0" 00:08:58.261 }, 00:08:58.261 { 00:08:58.261 "bdev_name": "Malloc1", 00:08:58.261 "nbd_device": "/dev/nbd1" 00:08:58.261 } 00:08:58.261 ]' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.261 /dev/nbd1' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.261 /dev/nbd1' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.261 256+0 records in 00:08:58.261 256+0 records out 00:08:58.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054505 s, 192 MB/s 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.261 256+0 records in 00:08:58.261 256+0 records out 00:08:58.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143926 s, 72.9 MB/s 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.261 256+0 records in 00:08:58.261 256+0 records out 00:08:58.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144176 s, 72.7 MB/s 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.261 06:11:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.519 06:11:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.778 06:11:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.036 06:11:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.036 06:11:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:59.293 06:11:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:59.293 [2024-11-27 06:11:18.655647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.293 [2024-11-27 06:11:18.673986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.293 [2024-11-27 06:11:18.673990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.293 [2024-11-27 06:11:18.703886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:59.293 [2024-11-27 06:11:18.703933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:02.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.581 06:11:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73270 /var/tmp/spdk-nbd.sock 00:09:02.581 06:11:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73270 ']' 00:09:02.581 06:11:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.581 06:11:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
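Annotation: the nbd round traced above condenses to the shell sketch below. This is a hedged reading of the trace, not the test script itself; it assumes an app_repeat instance is already serving RPC on /var/tmp/spdk-nbd.sock and that the kernel nbd module is loaded. Paths, sizes and device names are exactly as logged.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Create two 64 MB malloc bdevs (4096-byte blocks, as passed to
    # bdev_malloc_create in the log) and expose them over NBD.
    $rpc bdev_malloc_create 64 4096          # -> Malloc0
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data through each device, then compare byte-for-byte.
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"           # any mismatch fails the round
    done
    rm "$tmp"

    # Detach both devices and confirm the target no longer reports any NBD disk.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ]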
00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:02.582 06:11:21 event.app_repeat -- event/event.sh@39 -- # killprocess 73270 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 73270 ']' 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 73270 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73270 00:09:02.582 killing process with pid 73270 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73270' 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@973 -- # kill 73270 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@978 -- # wait 73270 00:09:02.582 spdk_app_start is called in Round 0. 00:09:02.582 Shutdown signal received, stop current app iteration 00:09:02.582 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 reinitialization... 00:09:02.582 spdk_app_start is called in Round 1. 00:09:02.582 Shutdown signal received, stop current app iteration 00:09:02.582 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 reinitialization... 00:09:02.582 spdk_app_start is called in Round 2. 00:09:02.582 Shutdown signal received, stop current app iteration 00:09:02.582 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 reinitialization... 00:09:02.582 spdk_app_start is called in Round 3. 00:09:02.582 Shutdown signal received, stop current app iteration 00:09:02.582 ************************************ 00:09:02.582 END TEST app_repeat 00:09:02.582 ************************************ 00:09:02.582 06:11:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:02.582 06:11:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:02.582 00:09:02.582 real 0m16.588s 00:09:02.582 user 0m37.348s 00:09:02.582 sys 0m2.207s 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.582 06:11:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:02.582 06:11:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:02.582 06:11:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:02.582 06:11:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.582 06:11:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.582 06:11:21 event -- common/autotest_common.sh@10 -- # set +x 00:09:02.582 ************************************ 00:09:02.582 START TEST cpu_locks 00:09:02.582 ************************************ 00:09:02.582 06:11:21 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:02.582 * Looking for test storage... 
00:09:02.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:02.582 06:11:22 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.582 06:11:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.582 06:11:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.582 06:11:22 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.582 06:11:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:02.840 06:11:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.841 06:11:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:02.841 06:11:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.841 06:11:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.841 06:11:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.841 06:11:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.841 --rc genhtml_branch_coverage=1 00:09:02.841 --rc genhtml_function_coverage=1 00:09:02.841 --rc genhtml_legend=1 00:09:02.841 --rc geninfo_all_blocks=1 00:09:02.841 --rc geninfo_unexecuted_blocks=1 00:09:02.841 00:09:02.841 ' 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.841 --rc genhtml_branch_coverage=1 00:09:02.841 --rc genhtml_function_coverage=1 
00:09:02.841 --rc genhtml_legend=1 00:09:02.841 --rc geninfo_all_blocks=1 00:09:02.841 --rc geninfo_unexecuted_blocks=1 00:09:02.841 00:09:02.841 ' 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.841 --rc genhtml_branch_coverage=1 00:09:02.841 --rc genhtml_function_coverage=1 00:09:02.841 --rc genhtml_legend=1 00:09:02.841 --rc geninfo_all_blocks=1 00:09:02.841 --rc geninfo_unexecuted_blocks=1 00:09:02.841 00:09:02.841 ' 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.841 --rc genhtml_branch_coverage=1 00:09:02.841 --rc genhtml_function_coverage=1 00:09:02.841 --rc genhtml_legend=1 00:09:02.841 --rc geninfo_all_blocks=1 00:09:02.841 --rc geninfo_unexecuted_blocks=1 00:09:02.841 00:09:02.841 ' 00:09:02.841 06:11:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:02.841 06:11:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:02.841 06:11:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:02.841 06:11:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.841 06:11:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 ************************************ 00:09:02.841 START TEST default_locks 00:09:02.841 ************************************ 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=73868 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 73868 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 73868 ']' 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 06:11:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:02.841 [2024-11-27 06:11:22.187780] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:09:02.841 [2024-11-27 06:11:22.187880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73868 ] 00:09:02.841 [2024-11-27 06:11:22.310758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:02.841 [2024-11-27 06:11:22.335186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.841 [2024-11-27 06:11:22.355376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.773 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.773 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:03.773 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 73868 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 73868 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 73868 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 73868 ']' 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 73868 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73868 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.774 killing process with pid 73868 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73868' 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 73868 00:09:03.774 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 73868 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 73868 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73868 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 73868 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 73868 ']' 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:04.031 ERROR: process (pid: 73868) is no longer running 00:09:04.031 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73868) - No such process 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:04.031 00:09:04.031 real 0m1.320s 00:09:04.031 user 0m1.431s 00:09:04.031 sys 0m0.340s 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.031 06:11:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:04.031 ************************************ 00:09:04.031 END TEST default_locks 00:09:04.031 ************************************ 00:09:04.031 06:11:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:04.031 06:11:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.031 06:11:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.031 06:11:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:04.031 ************************************ 00:09:04.031 START TEST default_locks_via_rpc 00:09:04.031 ************************************ 00:09:04.031 06:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:04.031 06:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=73926 00:09:04.031 06:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 73926 00:09:04.031 06:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73926 ']' 00:09:04.031 06:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.031 06:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.032 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:04.032 06:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.032 06:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.032 06:11:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.032 06:11:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:04.032 [2024-11-27 06:11:23.531176] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:04.032 [2024-11-27 06:11:23.531245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73926 ] 00:09:04.289 [2024-11-27 06:11:23.644734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:04.289 [2024-11-27 06:11:23.663477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.289 [2024-11-27 06:11:23.681758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.853 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.853 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:04.853 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:04.853 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.853 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 73926 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 73926 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 73926 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 73926 ']' 00:09:05.111 06:11:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 73926 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.111 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73926 00:09:05.368 killing process with pid 73926 00:09:05.369 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.369 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.369 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73926' 00:09:05.369 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 73926 00:09:05.369 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 73926 00:09:05.369 ************************************ 00:09:05.369 END TEST default_locks_via_rpc 00:09:05.369 ************************************ 00:09:05.369 00:09:05.369 real 0m1.339s 00:09:05.369 user 0m1.442s 00:09:05.369 sys 0m0.333s 00:09:05.369 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.369 06:11:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 06:11:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:05.369 06:11:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.369 06:11:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.369 06:11:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 ************************************ 00:09:05.369 START TEST non_locking_app_on_locked_coremask 00:09:05.369 ************************************ 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=73990 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 73990 /var/tmp/spdk.sock 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73990 ']' 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
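Annotation: the killprocess sequence that recurs throughout this trace (a kill -0 liveness probe, a comm-name lookup, SIGTERM, then wait) amounts to the sketch below. It is reconstructed from the traced commands, not copied from autotest_common.sh, and only covers the non-sudo branch the log actually exercises.

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1           # probe: is the process still alive?
        if [ "$(uname)" = Linux ]; then
            # The trace resolves the command name (reactor_0 in this run) and
            # compares it against "sudo"; that branch is never taken here, so
            # this sketch simply bails out rather than reconstructing it.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it (works for child jobs)
    }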
00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.369 06:11:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:05.636 [2024-11-27 06:11:24.914460] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:05.636 [2024-11-27 06:11:24.914512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73990 ] 00:09:05.636 [2024-11-27 06:11:25.026443] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:05.636 [2024-11-27 06:11:25.053139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.636 [2024-11-27 06:11:25.071076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=74018 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 74018 /var/tmp/spdk2.sock 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 74018 ']' 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:06.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.270 06:11:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:06.529 [2024-11-27 06:11:25.835665] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:06.529 [2024-11-27 06:11:25.835727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74018 ] 00:09:06.529 [2024-11-27 06:11:25.948374] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
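Annotation: the two launch lines traced above carry the point of non_locking_app_on_locked_coremask: a second spdk_tgt may share core 0 only by opting out of the CPU-mask file locks. A minimal sketch of that setup, with binary path, mask and socket exactly as logged (the real test interposes waitforlisten between the launches):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims core 0 and takes the spdk_cpu_lock file lock.
    $bin -m 0x1 &
    pid1=$!

    # Second instance reuses core 0 on its own RPC socket; without
    # --disable-cpumask-locks it would refuse to start (that failure path is
    # exercised by locking_app_on_locked_coremask further down).
    $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!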
00:09:06.529 [2024-11-27 06:11:25.983811] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:06.529 [2024-11-27 06:11:25.983842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.529 [2024-11-27 06:11:26.021788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.469 06:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.469 06:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:07.469 06:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 73990 00:09:07.469 06:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73990 00:09:07.469 06:11:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 73990 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73990 ']' 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 73990 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73990 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.730 killing process with pid 73990 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73990' 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 73990 00:09:07.730 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 73990 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 74018 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 74018 ']' 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 74018 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74018 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.992 killing process with pid 74018 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 74018' 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 74018 00:09:07.992 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 74018 00:09:08.254 00:09:08.254 real 0m2.752s 00:09:08.254 user 0m3.167s 00:09:08.254 sys 0m0.642s 00:09:08.254 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.254 06:11:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 ************************************ 00:09:08.254 END TEST non_locking_app_on_locked_coremask 00:09:08.254 ************************************ 00:09:08.254 06:11:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:08.254 06:11:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.254 06:11:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.254 06:11:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 ************************************ 00:09:08.254 START TEST locking_app_on_unlocked_coremask 00:09:08.254 ************************************ 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=74086 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 74086 /var/tmp/spdk.sock 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 74086 ']' 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.254 06:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 [2024-11-27 06:11:27.709060] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:08.254 [2024-11-27 06:11:27.709125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74086 ] 00:09:08.514 [2024-11-27 06:11:27.820637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:08.514 [2024-11-27 06:11:27.839858] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:08.514 [2024-11-27 06:11:27.839917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.514 [2024-11-27 06:11:27.859456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=74095 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 74095 /var/tmp/spdk2.sock 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 74095 ']' 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.514 06:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:08.776 [2024-11-27 06:11:28.078286] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:08.776 [2024-11-27 06:11:28.078355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74095 ] 00:09:08.776 [2024-11-27 06:11:28.193451] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
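Annotation: locking_app_on_unlocked_coremask inverts the previous setup: here the first instance is the one started with --disable-cpumask-locks, so the plain second instance is free to take the core-0 lock, and the traced assertion then checks that lock with lslocks. A sketch with flags and socket as logged:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance deactivates CPU core locks ("CPU core locks deactivated."
    # in the trace); the second, default-configured one takes the lock.
    $bin -m 0x1 --disable-cpumask-locks &
    pid1=$!
    $bin -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!

    # locks_exist as traced: assert the pid holds the SPDK core file lock
    # (74095 in this run).
    lslocks -p "$pid2" | grep -q spdk_cpu_lock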
00:09:08.776 [2024-11-27 06:11:28.225564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.776 [2024-11-27 06:11:28.260243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.718 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.718 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:09.718 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 74095 00:09:09.718 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74095 00:09:09.718 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 74086 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 74086 ']' 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 74086 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74086 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.978 killing process with pid 74086 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74086' 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 74086 00:09:09.978 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 74086 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 74095 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 74095 ']' 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 74095 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74095 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.238 killing process with pid 74095 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74095' 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@973 -- # kill 74095 00:09:10.238 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 74095 00:09:10.498 00:09:10.498 real 0m2.266s 00:09:10.498 user 0m2.547s 00:09:10.498 sys 0m0.648s 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.498 ************************************ 00:09:10.498 END TEST locking_app_on_unlocked_coremask 00:09:10.498 ************************************ 00:09:10.498 06:11:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:10.498 06:11:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.498 06:11:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.498 06:11:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.498 ************************************ 00:09:10.498 START TEST locking_app_on_locked_coremask 00:09:10.498 ************************************ 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74163 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 74163 /var/tmp/spdk.sock 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 74163 ']' 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.498 06:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.498 [2024-11-27 06:11:30.020886] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:10.498 [2024-11-27 06:11:30.020980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74163 ] 00:09:10.761 [2024-11-27 06:11:30.136037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
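Annotation: locking_app_on_locked_coremask drives the failure path: the second instance targets the already-locked core 0 without --disable-cpumask-locks, and the trace's NOT wrapper asserts that waitforlisten fails (the "Cannot create lock on core 0" error just below). A simplified reconstruction of that expected-failure idiom, hedged against the es bookkeeping visible in the trace:

    # Succeed only if the wrapped command fails (simplified: the traced helper
    # also inspects es > 128 for signal deaths, which is omitted here).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # pid and socket exactly as logged; waitforlisten is the traced helper
    # that polls for the RPC socket and gives up once the process dies.
    NOT waitforlisten 74191 /var/tmp/spdk2.sock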
00:09:10.761 [2024-11-27 06:11:30.161564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.761 [2024-11-27 06:11:30.181298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.701 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.701 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:11.701 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74191 00:09:11.701 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74191 /var/tmp/spdk2.sock 00:09:11.701 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:11.701 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 74191 /var/tmp/spdk2.sock 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 74191 /var/tmp/spdk2.sock 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 74191 ']' 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.702 06:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.702 [2024-11-27 06:11:30.949323] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:11.702 [2024-11-27 06:11:30.949391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ] 00:09:11.702 [2024-11-27 06:11:31.065406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:11.702 [2024-11-27 06:11:31.097507] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74163 has claimed it. 
00:09:11.702 [2024-11-27 06:11:31.097546] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:12.277 ERROR: process (pid: 74191) is no longer running 00:09:12.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (74191) - No such process 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 74163 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:12.277 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 74163 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 74163 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 74163 ']' 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 74163 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74163 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.538 killing process with pid 74163 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74163' 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 74163 00:09:12.538 06:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 74163 00:09:12.538 00:09:12.538 real 0m2.055s 00:09:12.538 user 0m2.382s 00:09:12.538 sys 0m0.411s 00:09:12.538 06:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.538 06:11:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:12.538 ************************************ 00:09:12.538 END TEST locking_app_on_locked_coremask 00:09:12.538 ************************************ 00:09:12.538 06:11:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:12.538 06:11:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.538 06:11:32 event.cpu_locks 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.538 06:11:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:12.798 ************************************ 00:09:12.798 START TEST locking_overlapped_coremask 00:09:12.798 ************************************ 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74237 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 74237 /var/tmp/spdk.sock 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 74237 ']' 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.798 06:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:12.798 [2024-11-27 06:11:32.119055] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:12.798 [2024-11-27 06:11:32.119428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74237 ] 00:09:12.798 [2024-11-27 06:11:32.235883] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
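spdk_tgt is launched here with -m 0x7; the mask is just a bit set of core IDs, which is why three reactors come up on cores 0-2 in the notices that follow. A quick sketch of the decoding:

    # Expand a hex core mask into core IDs; 0x7 has bits 0-2 set,
    # matching the three "Reactor started on core N" lines below.
    mask=0x7
    for ((i = 0; i < 32; i++)); do
        (( (mask >> i) & 1 )) && echo "core $i"
    done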
00:09:12.798 [2024-11-27 06:11:32.256842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.798 [2024-11-27 06:11:32.279696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.798 [2024-11-27 06:11:32.282937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.798 [2024-11-27 06:11:32.282943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74267 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74267 /var/tmp/spdk2.sock 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 74267 /var/tmp/spdk2.sock 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 74267 /var/tmp/spdk2.sock 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 74267 ']' 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:13.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.734 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 [2024-11-27 06:11:33.059298] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:13.734 [2024-11-27 06:11:33.059371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74267 ] 00:09:13.734 [2024-11-27 06:11:33.174675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:13.735 [2024-11-27 06:11:33.214406] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74237 has claimed it. 00:09:13.735 [2024-11-27 06:11:33.214450] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:14.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (74267) - No such process 00:09:14.307 ERROR: process (pid: 74267) is no longer running 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 74237 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 74237 ']' 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 74237 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74237 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.307 killing process with pid 74237 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74237' 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 74237 00:09:14.307 06:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 74237 00:09:14.569 00:09:14.569 real 0m1.935s 00:09:14.569 user 0m5.648s 00:09:14.569 sys 0m0.308s 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.569 
06:11:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.569 ************************************ 00:09:14.569 END TEST locking_overlapped_coremask 00:09:14.569 ************************************ 00:09:14.569 06:11:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:14.569 06:11:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.569 06:11:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.569 06:11:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:14.569 ************************************ 00:09:14.569 START TEST locking_overlapped_coremask_via_rpc 00:09:14.569 ************************************ 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74313 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 74313 /var/tmp/spdk.sock 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 74313 ']' 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.569 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.569 [2024-11-27 06:11:34.091890] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:14.569 [2024-11-27 06:11:34.091968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74313 ] 00:09:14.831 [2024-11-27 06:11:34.206415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:14.831 [2024-11-27 06:11:34.232139] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
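The --disable-cpumask-locks flag passed to the target above is what produces the "CPU core locks deactivated." notice: the process starts without claiming any /var/tmp/spdk_cpu_lock_* files, and the framework_enable_cpumask_locks RPC (exercised below) can claim them later on demand. A sketch of that sequence, using this repo's paths:

    # Start unlocked, then opt back in over JSON-RPC; for mask 0x7 the RPC
    # claims spdk_cpu_lock_000..002 at call time rather than at startup.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks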
00:09:14.831 [2024-11-27 06:11:34.232178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.831 [2024-11-27 06:11:34.253612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.831 [2024-11-27 06:11:34.253844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.831 [2024-11-27 06:11:34.253986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74345 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 74345 /var/tmp/spdk2.sock 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 74345 ']' 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:15.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.773 06:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.773 [2024-11-27 06:11:35.011751] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:15.773 [2024-11-27 06:11:35.011820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74345 ] 00:09:15.773 [2024-11-27 06:11:35.132386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:15.773 [2024-11-27 06:11:35.173106] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:15.773 [2024-11-27 06:11:35.173143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:15.773 [2024-11-27 06:11:35.213871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.773 [2024-11-27 06:11:35.213970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.773 [2024-11-27 06:11:35.213974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.344 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.344 [2024-11-27 06:11:35.864018] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74313 has claimed it. 
00:09:16.344 2024/11/27 06:11:35 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:09:16.344 request: 00:09:16.344 { 00:09:16.606 "method": "framework_enable_cpumask_locks", 00:09:16.606 "params": {} 00:09:16.606 } 00:09:16.606 Got JSON-RPC error response 00:09:16.606 GoRPCClient: error on JSON-RPC call 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 74313 /var/tmp/spdk.sock 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 74313 ']' 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.606 06:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 74345 /var/tmp/spdk2.sock 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 74345 ']' 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:16.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
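The -32603 failure above is the expected outcome: both targets run with --disable-cpumask-locks, the first (pid 74313, mask 0x7) has already enabled its locks via RPC, so the second (mask 0x1c) cannot claim the shared core 2. The failing call, reduced to a sketch against the second target's socket:

    # Same RPC pointed at the second instance; core 2 overlaps 0x7 and 0x1c,
    # so claim_cpu_cores fails and the call returns Code=-32603 as logged.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks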
00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.606 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:16.867 00:09:16.867 real 0m2.223s 00:09:16.867 user 0m1.018s 00:09:16.867 sys 0m0.148s 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.867 06:11:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.867 ************************************ 00:09:16.867 END TEST locking_overlapped_coremask_via_rpc 00:09:16.867 ************************************ 00:09:16.867 06:11:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:16.867 06:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74313 ]] 00:09:16.867 06:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74313 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 74313 ']' 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 74313 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74313 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74313' 00:09:16.867 killing process with pid 74313 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 74313 00:09:16.867 06:11:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 74313 00:09:17.128 06:11:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74345 ]] 00:09:17.128 06:11:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74345 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 74345 ']' 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 74345 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.128 
06:11:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74345 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:17.128 killing process with pid 74345 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74345' 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 74345 00:09:17.128 06:11:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 74345 00:09:17.390 06:11:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:17.390 06:11:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:17.390 06:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74313 ]] 00:09:17.390 06:11:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74313 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 74313 ']' 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 74313 00:09:17.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74313) - No such process 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 74313 is not found' 00:09:17.390 Process with pid 74313 is not found 00:09:17.390 06:11:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74345 ]] 00:09:17.390 06:11:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74345 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 74345 ']' 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 74345 00:09:17.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74345) - No such process 00:09:17.390 Process with pid 74345 is not found 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 74345 is not found' 00:09:17.390 06:11:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:17.390 ************************************ 00:09:17.390 END TEST cpu_locks 00:09:17.390 ************************************ 00:09:17.390 00:09:17.390 real 0m14.754s 00:09:17.390 user 0m27.535s 00:09:17.390 sys 0m3.389s 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.390 06:11:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.390 ************************************ 00:09:17.390 END TEST event 00:09:17.390 ************************************ 00:09:17.390 00:09:17.390 real 0m38.201s 00:09:17.390 user 1m16.447s 00:09:17.390 sys 0m6.181s 00:09:17.390 06:11:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.390 06:11:36 event -- common/autotest_common.sh@10 -- # set +x 00:09:17.390 06:11:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:17.390 06:11:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.390 06:11:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.390 06:11:36 -- common/autotest_common.sh@10 -- # set +x 00:09:17.390 ************************************ 00:09:17.390 START TEST thread 00:09:17.390 ************************************ 00:09:17.390 06:11:36 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:17.390 * Looking for test storage... 
00:09:17.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:17.390 06:11:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:17.390 06:11:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.390 06:11:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.390 06:11:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.390 06:11:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.390 06:11:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.390 06:11:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.390 06:11:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.390 06:11:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.390 06:11:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.390 06:11:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.390 06:11:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.390 06:11:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.390 06:11:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.390 06:11:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.390 06:11:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:17.390 06:11:36 thread -- scripts/common.sh@345 -- # : 1 00:09:17.390 06:11:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.390 06:11:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.390 06:11:36 thread -- scripts/common.sh@365 -- # decimal 1 00:09:17.651 06:11:36 thread -- scripts/common.sh@353 -- # local d=1 00:09:17.651 06:11:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.651 06:11:36 thread -- scripts/common.sh@355 -- # echo 1 00:09:17.651 06:11:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.651 06:11:36 thread -- scripts/common.sh@366 -- # decimal 2 00:09:17.651 06:11:36 thread -- scripts/common.sh@353 -- # local d=2 00:09:17.651 06:11:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.651 06:11:36 thread -- scripts/common.sh@355 -- # echo 2 00:09:17.651 06:11:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.651 06:11:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.651 06:11:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.651 06:11:36 thread -- scripts/common.sh@368 -- # return 0 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.651 --rc genhtml_branch_coverage=1 00:09:17.651 --rc genhtml_function_coverage=1 00:09:17.651 --rc genhtml_legend=1 00:09:17.651 --rc geninfo_all_blocks=1 00:09:17.651 --rc geninfo_unexecuted_blocks=1 00:09:17.651 00:09:17.651 ' 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.651 --rc genhtml_branch_coverage=1 00:09:17.651 --rc genhtml_function_coverage=1 00:09:17.651 --rc genhtml_legend=1 00:09:17.651 --rc geninfo_all_blocks=1 00:09:17.651 --rc geninfo_unexecuted_blocks=1 00:09:17.651 00:09:17.651 ' 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:17.651 --rc genhtml_branch_coverage=1 00:09:17.651 --rc genhtml_function_coverage=1 00:09:17.651 --rc genhtml_legend=1 00:09:17.651 --rc geninfo_all_blocks=1 00:09:17.651 --rc geninfo_unexecuted_blocks=1 00:09:17.651 00:09:17.651 ' 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.651 --rc genhtml_branch_coverage=1 00:09:17.651 --rc genhtml_function_coverage=1 00:09:17.651 --rc genhtml_legend=1 00:09:17.651 --rc geninfo_all_blocks=1 00:09:17.651 --rc geninfo_unexecuted_blocks=1 00:09:17.651 00:09:17.651 ' 00:09:17.651 06:11:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.651 06:11:36 thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.651 ************************************ 00:09:17.651 START TEST thread_poller_perf 00:09:17.651 ************************************ 00:09:17.651 06:11:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:17.651 [2024-11-27 06:11:36.953376] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:17.651 [2024-11-27 06:11:36.953446] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74494 ] 00:09:17.651 [2024-11-27 06:11:37.065092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:17.651 [2024-11-27 06:11:37.088792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.651 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:17.651 [2024-11-27 06:11:37.106142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.033 [2024-11-27T06:11:38.563Z] ====================================== 00:09:19.033 [2024-11-27T06:11:38.563Z] busy:2607524800 (cyc) 00:09:19.033 [2024-11-27T06:11:38.563Z] total_run_count: 412000 00:09:19.033 [2024-11-27T06:11:38.563Z] tsc_hz: 2600000000 (cyc) 00:09:19.033 [2024-11-27T06:11:38.563Z] ====================================== 00:09:19.033 [2024-11-27T06:11:38.563Z] poller_cost: 6328 (cyc), 2433 (nsec) 00:09:19.033 00:09:19.033 real 0m1.199s 00:09:19.033 user 0m1.063s 00:09:19.033 sys 0m0.032s 00:09:19.033 06:11:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.033 06:11:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:19.033 ************************************ 00:09:19.033 END TEST thread_poller_perf 00:09:19.033 ************************************ 00:09:19.033 06:11:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:19.033 06:11:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:19.033 06:11:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.033 06:11:38 thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.033 ************************************ 00:09:19.033 START TEST thread_poller_perf 00:09:19.033 ************************************ 00:09:19.033 06:11:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:19.033 [2024-11-27 06:11:38.191916] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:19.034 [2024-11-27 06:11:38.191971] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74524 ] 00:09:19.034 [2024-11-27 06:11:38.301134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:19.034 [2024-11-27 06:11:38.325382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.034 Running 1000 pollers for 1 seconds with 0 microseconds period. 
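The first run's summary above is internally consistent: poller_cost matches busy cycles divided by total_run_count, converted to nanoseconds with the logged tsc_hz. Reproduced with shell integer arithmetic from the logged counters:

    # busy / total_run_count -> cycles per poll; scale by 1e9/tsc_hz for nsec.
    echo $(( 2607524800 / 412000 ))              # 6328 cycles, as logged
    echo $(( 6328 * 1000000000 / 2600000000 ))   # 2433 nsec, as logged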
00:09:19.034 [2024-11-27 06:11:38.341712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.977 [2024-11-27T06:11:39.507Z] ====================================== 00:09:19.977 [2024-11-27T06:11:39.507Z] busy:2601621240 (cyc) 00:09:19.977 [2024-11-27T06:11:39.507Z] total_run_count: 5134000 00:09:19.977 [2024-11-27T06:11:39.507Z] tsc_hz: 2600000000 (cyc) 00:09:19.977 [2024-11-27T06:11:39.507Z] ====================================== 00:09:19.977 [2024-11-27T06:11:39.507Z] poller_cost: 506 (cyc), 194 (nsec) 00:09:19.977 00:09:19.977 real 0m1.191s 00:09:19.977 user 0m1.062s 00:09:19.977 sys 0m0.024s 00:09:19.977 06:11:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.977 06:11:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:19.977 ************************************ 00:09:19.977 END TEST thread_poller_perf 00:09:19.977 ************************************ 00:09:19.977 06:11:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:19.977 00:09:19.977 real 0m2.600s 00:09:19.977 user 0m2.242s 00:09:19.977 sys 0m0.155s 00:09:19.977 06:11:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.977 06:11:39 thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.977 ************************************ 00:09:19.977 END TEST thread 00:09:19.977 ************************************ 00:09:19.977 06:11:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:19.977 06:11:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:19.977 06:11:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.977 06:11:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.977 06:11:39 -- common/autotest_common.sh@10 -- # set +x 00:09:19.977 ************************************ 00:09:19.977 START TEST app_cmdline 00:09:19.977 ************************************ 00:09:19.977 06:11:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:19.977 * Looking for test storage... 
00:09:20.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.239 06:11:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.239 --rc genhtml_branch_coverage=1 00:09:20.239 --rc genhtml_function_coverage=1 00:09:20.239 --rc genhtml_legend=1 00:09:20.239 --rc geninfo_all_blocks=1 00:09:20.239 --rc geninfo_unexecuted_blocks=1 00:09:20.239 00:09:20.239 ' 00:09:20.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
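The trace above is autotest_common.sh probing the installed lcov: it splits both version strings on ".", "-" and ":" and compares numeric fields left to right to decide which generation of LCOV_OPTS to export. A condensed sketch of that comparison, assuming the splitting and defaults visible in the trace (numeric fields only):

    # lt A B: succeed when version A sorts before B, field by field;
    # missing fields count as 0, so "1.15" < "2" because 1 < 2.
    lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo "pre-2.0 lcov: keep the 1.x LCOV_OPTS"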
00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.239 --rc genhtml_branch_coverage=1 00:09:20.239 --rc genhtml_function_coverage=1 00:09:20.239 --rc genhtml_legend=1 00:09:20.239 --rc geninfo_all_blocks=1 00:09:20.239 --rc geninfo_unexecuted_blocks=1 00:09:20.239 00:09:20.239 ' 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.239 --rc genhtml_branch_coverage=1 00:09:20.239 --rc genhtml_function_coverage=1 00:09:20.239 --rc genhtml_legend=1 00:09:20.239 --rc geninfo_all_blocks=1 00:09:20.239 --rc geninfo_unexecuted_blocks=1 00:09:20.239 00:09:20.239 ' 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.239 --rc genhtml_branch_coverage=1 00:09:20.239 --rc genhtml_function_coverage=1 00:09:20.239 --rc genhtml_legend=1 00:09:20.239 --rc geninfo_all_blocks=1 00:09:20.239 --rc geninfo_unexecuted_blocks=1 00:09:20.239 00:09:20.239 ' 00:09:20.239 06:11:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:20.239 06:11:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74601 00:09:20.239 06:11:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74601 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 74601 ']' 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.239 06:11:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:20.239 06:11:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:20.239 [2024-11-27 06:11:39.614870] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:20.239 [2024-11-27 06:11:39.614939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74601 ] 00:09:20.239 [2024-11-27 06:11:39.726719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
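This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; anything else should bounce with -32601, which is exactly what the env_dpdk_get_mem_stats probe below demonstrates. In sketch form:

    # Whitelisted methods succeed; anything else returns Method not found.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version            # allowed: returns the version JSON below
    $rpc rpc_get_methods             # allowed: lists exactly the two methods
    $rpc env_dpdk_get_mem_stats      # rejected: Code=-32601 Msg=Method not found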
00:09:20.239 [2024-11-27 06:11:39.751154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.500 [2024-11-27 06:11:39.769268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.072 06:11:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.072 06:11:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:21.072 06:11:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:21.333 { 00:09:21.333 "fields": { 00:09:21.333 "commit": "2f2acf4eb", 00:09:21.333 "major": 25, 00:09:21.333 "minor": 1, 00:09:21.333 "patch": 0, 00:09:21.333 "suffix": "-pre" 00:09:21.333 }, 00:09:21.333 "version": "SPDK v25.01-pre git sha1 2f2acf4eb" 00:09:21.333 } 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:21.333 06:11:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:21.333 06:11:40 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:21.594 2024/11/27 06:11:40 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:09:21.594 request: 00:09:21.594 { 00:09:21.594 
"method": "env_dpdk_get_mem_stats", 00:09:21.594 "params": {} 00:09:21.594 } 00:09:21.594 Got JSON-RPC error response 00:09:21.594 GoRPCClient: error on JSON-RPC call 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:21.594 06:11:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74601 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 74601 ']' 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 74601 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74601 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.594 killing process with pid 74601 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74601' 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 74601 00:09:21.594 06:11:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 74601 00:09:21.855 00:09:21.855 real 0m1.708s 00:09:21.855 user 0m2.152s 00:09:21.855 sys 0m0.318s 00:09:21.855 06:11:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.855 ************************************ 00:09:21.855 END TEST app_cmdline 00:09:21.855 ************************************ 00:09:21.855 06:11:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 06:11:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:21.855 06:11:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.855 06:11:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.855 06:11:41 -- common/autotest_common.sh@10 -- # set +x 00:09:21.855 ************************************ 00:09:21.855 START TEST version 00:09:21.855 ************************************ 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:21.855 * Looking for test storage... 
00:09:21.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.855 06:11:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.855 06:11:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.855 06:11:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.855 06:11:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.855 06:11:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.855 06:11:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.855 06:11:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.855 06:11:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.855 06:11:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.855 06:11:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.855 06:11:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.855 06:11:41 version -- scripts/common.sh@344 -- # case "$op" in 00:09:21.855 06:11:41 version -- scripts/common.sh@345 -- # : 1 00:09:21.855 06:11:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.855 06:11:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.855 06:11:41 version -- scripts/common.sh@365 -- # decimal 1 00:09:21.855 06:11:41 version -- scripts/common.sh@353 -- # local d=1 00:09:21.855 06:11:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.855 06:11:41 version -- scripts/common.sh@355 -- # echo 1 00:09:21.855 06:11:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.855 06:11:41 version -- scripts/common.sh@366 -- # decimal 2 00:09:21.855 06:11:41 version -- scripts/common.sh@353 -- # local d=2 00:09:21.855 06:11:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.855 06:11:41 version -- scripts/common.sh@355 -- # echo 2 00:09:21.855 06:11:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.855 06:11:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.855 06:11:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.855 06:11:41 version -- scripts/common.sh@368 -- # return 0 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.855 --rc genhtml_branch_coverage=1 00:09:21.855 --rc genhtml_function_coverage=1 00:09:21.855 --rc genhtml_legend=1 00:09:21.855 --rc geninfo_all_blocks=1 00:09:21.855 --rc geninfo_unexecuted_blocks=1 00:09:21.855 00:09:21.855 ' 00:09:21.855 06:11:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.856 --rc genhtml_branch_coverage=1 00:09:21.856 --rc genhtml_function_coverage=1 00:09:21.856 --rc genhtml_legend=1 00:09:21.856 --rc geninfo_all_blocks=1 00:09:21.856 --rc geninfo_unexecuted_blocks=1 00:09:21.856 00:09:21.856 ' 00:09:21.856 06:11:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.856 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:21.856 --rc genhtml_branch_coverage=1 00:09:21.856 --rc genhtml_function_coverage=1 00:09:21.856 --rc genhtml_legend=1 00:09:21.856 --rc geninfo_all_blocks=1 00:09:21.856 --rc geninfo_unexecuted_blocks=1 00:09:21.856 00:09:21.856 ' 00:09:21.856 06:11:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.856 --rc genhtml_branch_coverage=1 00:09:21.856 --rc genhtml_function_coverage=1 00:09:21.856 --rc genhtml_legend=1 00:09:21.856 --rc geninfo_all_blocks=1 00:09:21.856 --rc geninfo_unexecuted_blocks=1 00:09:21.856 00:09:21.856 ' 00:09:21.856 06:11:41 version -- app/version.sh@17 -- # get_header_version major 00:09:21.856 06:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # cut -f2 00:09:21.856 06:11:41 version -- app/version.sh@17 -- # major=25 00:09:21.856 06:11:41 version -- app/version.sh@18 -- # get_header_version minor 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # cut -f2 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.856 06:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.856 06:11:41 version -- app/version.sh@18 -- # minor=1 00:09:21.856 06:11:41 version -- app/version.sh@19 -- # get_header_version patch 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # cut -f2 00:09:21.856 06:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.856 06:11:41 version -- app/version.sh@19 -- # patch=0 00:09:21.856 06:11:41 version -- app/version.sh@20 -- # get_header_version suffix 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # cut -f2 00:09:21.856 06:11:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:21.856 06:11:41 version -- app/version.sh@14 -- # tr -d '"' 00:09:21.856 06:11:41 version -- app/version.sh@20 -- # suffix=-pre 00:09:21.856 06:11:41 version -- app/version.sh@22 -- # version=25.1 00:09:21.856 06:11:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:21.856 06:11:41 version -- app/version.sh@28 -- # version=25.1rc0 00:09:21.856 06:11:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:21.856 06:11:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:22.117 06:11:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:22.117 06:11:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:22.117 00:09:22.117 real 0m0.202s 00:09:22.117 user 0m0.138s 00:09:22.117 sys 0m0.091s 00:09:22.117 06:11:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.117 06:11:41 version -- common/autotest_common.sh@10 -- # set +x 00:09:22.117 ************************************ 00:09:22.117 END TEST version 00:09:22.117 ************************************ 00:09:22.117 06:11:41 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:22.117 06:11:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:22.117 06:11:41 -- spdk/autotest.sh@194 -- # uname -s 00:09:22.117 06:11:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:22.117 06:11:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:22.117 06:11:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:22.117 06:11:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:22.117 06:11:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:22.117 06:11:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:22.117 06:11:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.117 06:11:41 -- common/autotest_common.sh@10 -- # set +x 00:09:22.117 06:11:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:22.117 06:11:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:22.117 06:11:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:22.117 06:11:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:22.118 06:11:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:22.118 06:11:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:22.118 06:11:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:22.118 06:11:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.118 06:11:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.118 06:11:41 -- common/autotest_common.sh@10 -- # set +x 00:09:22.118 ************************************ 00:09:22.118 START TEST nvmf_tcp 00:09:22.118 ************************************ 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:22.118 * Looking for test storage... 00:09:22.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.118 06:11:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.118 --rc genhtml_branch_coverage=1 00:09:22.118 --rc genhtml_function_coverage=1 00:09:22.118 --rc genhtml_legend=1 00:09:22.118 --rc geninfo_all_blocks=1 00:09:22.118 --rc geninfo_unexecuted_blocks=1 00:09:22.118 00:09:22.118 ' 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.118 --rc genhtml_branch_coverage=1 00:09:22.118 --rc genhtml_function_coverage=1 00:09:22.118 --rc genhtml_legend=1 00:09:22.118 --rc geninfo_all_blocks=1 00:09:22.118 --rc geninfo_unexecuted_blocks=1 00:09:22.118 00:09:22.118 ' 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.118 --rc genhtml_branch_coverage=1 00:09:22.118 --rc genhtml_function_coverage=1 00:09:22.118 --rc genhtml_legend=1 00:09:22.118 --rc geninfo_all_blocks=1 00:09:22.118 --rc geninfo_unexecuted_blocks=1 00:09:22.118 00:09:22.118 ' 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.118 --rc genhtml_branch_coverage=1 00:09:22.118 --rc genhtml_function_coverage=1 00:09:22.118 --rc genhtml_legend=1 00:09:22.118 --rc geninfo_all_blocks=1 00:09:22.118 --rc geninfo_unexecuted_blocks=1 00:09:22.118 00:09:22.118 ' 00:09:22.118 06:11:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:22.118 06:11:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:22.118 06:11:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.118 06:11:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.379 ************************************ 00:09:22.379 START TEST nvmf_target_core 00:09:22.379 ************************************ 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:22.379 * Looking for test storage... 00:09:22.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:22.379 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.380 --rc genhtml_branch_coverage=1 00:09:22.380 --rc genhtml_function_coverage=1 00:09:22.380 --rc genhtml_legend=1 00:09:22.380 --rc geninfo_all_blocks=1 00:09:22.380 --rc geninfo_unexecuted_blocks=1 00:09:22.380 00:09:22.380 ' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.380 --rc genhtml_branch_coverage=1 00:09:22.380 --rc genhtml_function_coverage=1 00:09:22.380 --rc genhtml_legend=1 00:09:22.380 --rc geninfo_all_blocks=1 00:09:22.380 --rc geninfo_unexecuted_blocks=1 00:09:22.380 00:09:22.380 ' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.380 --rc genhtml_branch_coverage=1 00:09:22.380 --rc genhtml_function_coverage=1 00:09:22.380 --rc genhtml_legend=1 00:09:22.380 --rc geninfo_all_blocks=1 00:09:22.380 --rc geninfo_unexecuted_blocks=1 00:09:22.380 00:09:22.380 ' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.380 --rc genhtml_branch_coverage=1 00:09:22.380 --rc genhtml_function_coverage=1 00:09:22.380 --rc genhtml_legend=1 00:09:22.380 --rc geninfo_all_blocks=1 00:09:22.380 --rc geninfo_unexecuted_blocks=1 00:09:22.380 00:09:22.380 ' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
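Each test in this log re-sources test/nvmf/common.sh, which pins the TCP ports and, as xtraced above, generates a fresh host NQN with nvme gen-hostnqn and derives the host ID from its uuid suffix. A rough sketch of that identity setup, assuming nvme-cli is installed (the exact extraction inside common.sh may differ):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
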
00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.380 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.380 ************************************ 00:09:22.380 START TEST nvmf_abort 00:09:22.380 ************************************ 00:09:22.380 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:22.380 * Looking for test storage... 
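The recurring warning "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" a few lines above is bash complaining about '[' '' -eq 1 ']', a numeric test fed an empty string; the condition simply evaluates false and the run continues. A defensive sketch of the same pattern with an explicit default (the flag name is hypothetical):

    flag=""                          # empty, as in the log
    if [ "${flag:-0}" -eq 1 ]; then  # default to 0 so the test stays numeric
        echo "feature enabled"
    fi
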
00:09:22.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.642 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.642 --rc genhtml_branch_coverage=1 00:09:22.642 --rc genhtml_function_coverage=1 00:09:22.642 --rc genhtml_legend=1 00:09:22.642 --rc geninfo_all_blocks=1 00:09:22.642 --rc geninfo_unexecuted_blocks=1 00:09:22.642 00:09:22.642 ' 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.643 --rc genhtml_branch_coverage=1 00:09:22.643 --rc genhtml_function_coverage=1 00:09:22.643 --rc genhtml_legend=1 00:09:22.643 --rc geninfo_all_blocks=1 00:09:22.643 --rc geninfo_unexecuted_blocks=1 00:09:22.643 00:09:22.643 ' 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.643 --rc genhtml_branch_coverage=1 00:09:22.643 --rc genhtml_function_coverage=1 00:09:22.643 --rc genhtml_legend=1 00:09:22.643 --rc geninfo_all_blocks=1 00:09:22.643 --rc geninfo_unexecuted_blocks=1 00:09:22.643 00:09:22.643 ' 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.643 --rc genhtml_branch_coverage=1 00:09:22.643 --rc genhtml_function_coverage=1 00:09:22.643 --rc genhtml_legend=1 00:09:22.643 --rc geninfo_all_blocks=1 00:09:22.643 --rc geninfo_unexecuted_blocks=1 00:09:22.643 00:09:22.643 ' 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
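The lcov preamble repeated before each test is scripts/common.sh comparing the installed lcov against version 1.15: both strings are split on ".", "-" and ":" into arrays and walked field by field. A compressed sketch of that comparison, assuming purely numeric fields (the real script filters each field through its decimal helper first):

    version_lt() {  # returns 0 when $1 < $2
        local -a a b
        local IFS=.-: v
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1  # equal counts as not less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 call xtraced above
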
00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.643 06:11:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.643 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:22.643 
06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:09:22.643 Cannot find device "nvmf_init_br" 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:22.643 Cannot find device "nvmf_init_br2" 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:22.643 Cannot find device "nvmf_tgt_br" 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.643 Cannot find device "nvmf_tgt_br2" 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:09:22.643 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:22.643 Cannot find device "nvmf_init_br" 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:22.644 Cannot find device "nvmf_init_br2" 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:22.644 Cannot find device "nvmf_tgt_br" 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:22.644 Cannot find device "nvmf_tgt_br2" 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:22.644 Cannot find device "nvmf_br" 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:22.644 Cannot find device "nvmf_init_if" 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:22.644 Cannot find device "nvmf_init_if2" 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.644 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:22.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:22.904 00:09:22.904 --- 10.0.0.3 ping statistics --- 00:09:22.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.904 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:22.904 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:22.904 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:09:22.904 00:09:22.904 --- 10.0.0.4 ping statistics --- 00:09:22.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.904 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:22.904 00:09:22.904 --- 10.0.0.1 ping statistics --- 00:09:22.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.904 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:22.904 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:22.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:22.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:22.905 00:09:22.905 --- 10.0.0.2 ping statistics --- 00:09:22.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.905 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.905 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=75034 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 75034 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 75034 ']' 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.167 06:11:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:23.167 [2024-11-27 06:11:42.491371] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:23.167 [2024-11-27 06:11:42.491428] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.167 [2024-11-27 06:11:42.607846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:23.167 [2024-11-27 06:11:42.636584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.167 [2024-11-27 06:11:42.658779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.167 [2024-11-27 06:11:42.658820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.167 [2024-11-27 06:11:42.658826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.167 [2024-11-27 06:11:42.658830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.167 [2024-11-27 06:11:42.658835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.167 [2024-11-27 06:11:42.659791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.167 [2024-11-27 06:11:42.660154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.167 [2024-11-27 06:11:42.660337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 [2024-11-27 06:11:43.425646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 Malloc0 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 Delay0 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 [2024-11-27 06:11:43.484879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.108 06:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:24.369 [2024-11-27 06:11:43.668655] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:26.283 Initializing NVMe Controllers 00:09:26.283 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:26.283 controller IO queue size 128 less than required 00:09:26.283 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:26.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:26.283 Initialization complete. Launching workers. 
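Before launching the abort example, the script assembles the target entirely over RPC: a TCP transport (-u 8192 -a 256), a 64 MiB malloc bdev with 4096-byte blocks, a Delay0 wrapper with the 1000000-microsecond latency values shown, and subsystem cnode0 with a listener on 10.0.0.3:4420. The same sequence consolidated as plain rpc.py calls (rpc_cmd in the log is a thin wrapper around this; default RPC socket assumed):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420

The counters that follow summarize the run: with a 128-deep queue against the delayed namespace, nearly all submitted aborts (31283 of 31343) completed successfully.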
00:09:26.283 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 31279 00:09:26.283 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31343, failed to submit 62 00:09:26.283 success 31283, unsuccessful 60, failed 0 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.283 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.283 rmmod nvme_tcp 00:09:26.283 rmmod nvme_fabrics 00:09:26.545 rmmod nvme_keyring 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 75034 ']' 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 75034 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 75034 ']' 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 75034 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75034 00:09:26.545 killing process with pid 75034 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75034' 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 75034 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 75034 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:26.545 06:11:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:26.545 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:09:26.807 00:09:26.807 real 0m4.401s 00:09:26.807 user 0m12.043s 00:09:26.807 sys 0m0.885s 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.807 ************************************ 00:09:26.807 END TEST nvmf_abort 00:09:26.807 ************************************ 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.807 ************************************ 00:09:26.807 START TEST nvmf_ns_hotplug_stress 00:09:26.807 ************************************ 00:09:26.807 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:27.070 * Looking for test storage... 00:09:27.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.070 --rc genhtml_branch_coverage=1 00:09:27.070 --rc genhtml_function_coverage=1 00:09:27.070 --rc genhtml_legend=1 00:09:27.070 --rc geninfo_all_blocks=1 00:09:27.070 --rc geninfo_unexecuted_blocks=1 00:09:27.070 00:09:27.070 ' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.070 --rc genhtml_branch_coverage=1 00:09:27.070 --rc genhtml_function_coverage=1 00:09:27.070 --rc genhtml_legend=1 00:09:27.070 --rc geninfo_all_blocks=1 00:09:27.070 --rc geninfo_unexecuted_blocks=1 00:09:27.070 00:09:27.070 ' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.070 --rc genhtml_branch_coverage=1 00:09:27.070 --rc genhtml_function_coverage=1 00:09:27.070 --rc genhtml_legend=1 00:09:27.070 --rc geninfo_all_blocks=1 00:09:27.070 --rc geninfo_unexecuted_blocks=1 00:09:27.070 00:09:27.070 ' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:27.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.070 --rc genhtml_branch_coverage=1 00:09:27.070 --rc genhtml_function_coverage=1 00:09:27.070 --rc genhtml_legend=1 00:09:27.070 --rc geninfo_all_blocks=1 00:09:27.070 --rc geninfo_unexecuted_blocks=1 00:09:27.070 00:09:27.070 ' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.070 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.071 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:27.071 06:11:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:27.071 Cannot find device "nvmf_init_br" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:27.071 Cannot find device "nvmf_init_br2" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:27.071 Cannot find device "nvmf_tgt_br" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.071 Cannot find device "nvmf_tgt_br2" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:27.071 Cannot find device "nvmf_init_br" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:27.071 Cannot find device "nvmf_init_br2" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:27.071 Cannot find device "nvmf_tgt_br" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:27.071 Cannot find device "nvmf_tgt_br2" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:27.071 Cannot find device "nvmf_br" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:27.071 Cannot find device "nvmf_init_if" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:27.071 Cannot find device "nvmf_init_if2" 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.071 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:27.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:27.333 00:09:27.333 --- 10.0.0.3 ping statistics --- 00:09:27.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.333 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:27.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:27.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:09:27.333 00:09:27.333 --- 10.0.0.4 ping statistics --- 00:09:27.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.333 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:09:27.333 00:09:27.333 --- 10.0.0.1 ping statistics --- 00:09:27.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.333 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:09:27.333 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:27.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:09:27.333 00:09:27.333 --- 10.0.0.2 ping statistics --- 00:09:27.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.333 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=75348 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 75348 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 75348 ']' 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.334 06:11:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.334 06:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.334 [2024-11-27 06:11:46.819457] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:09:27.334 [2024-11-27 06:11:46.819518] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.595 [2024-11-27 06:11:46.934862] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:27.595 [2024-11-27 06:11:46.962389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.595 [2024-11-27 06:11:46.981219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.595 [2024-11-27 06:11:46.981375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.595 [2024-11-27 06:11:46.981436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.595 [2024-11-27 06:11:46.981482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.595 [2024-11-27 06:11:46.981500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
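The -m 0xE core mask handed to nvmf_tgt here is binary 1110, i.e. cores 1, 2 and 3, which is why the app reports three available cores and the three reactor_run notices that follow land on exactly those cores. A quick way to decode such a mask in shell:

  mask=0xE                                   # value from nvmfappstart -m 0xE above
  for core in 0 1 2 3; do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done                                       # prints cores 1, 2 and 3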
00:09:27.595 [2024-11-27 06:11:46.982189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.595 [2024-11-27 06:11:46.982292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.595 [2024-11-27 06:11:46.982538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:28.539 [2024-11-27 06:11:47.928254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.539 06:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:28.801 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:29.063 [2024-11-27 06:11:48.349155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:29.063 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:29.063 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:29.325 Malloc0 00:09:29.325 06:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:29.644 Delay0 00:09:29.644 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.912 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:29.912 NULL1 00:09:29.912 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:30.175 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=75479 00:09:30.175 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:30.175 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:30.175 06:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.559 Read completed with error (sct=0, sc=11) 00:09:31.559 06:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.559 06:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:31.559 06:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:31.820 true 00:09:31.820 06:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:31.820 06:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.763 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.024 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:33.024 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:33.024 true 00:09:33.024 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:33.024 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.285 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.547 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:33.547 06:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:33.808 true 00:09:33.808 06:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 75479 00:09:33.808 06:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.801 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.801 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:34.801 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:35.063 true 00:09:35.063 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:35.063 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.323 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.585 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:35.585 06:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:35.843 true 00:09:35.843 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:35.843 06:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.775 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.032 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:37.032 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:37.032 true 00:09:37.032 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:37.033 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.289 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.546 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:37.546 06:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:37.804 true 00:09:37.804 06:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:37.804 06:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.775 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.052 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:39.053 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:39.053 true 00:09:39.053 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:39.053 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.313 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.574 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:39.574 06:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:39.836 true 00:09:39.836 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:39.836 06:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.782 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.044 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:41.044 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:41.044 true 00:09:41.044 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:41.044 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.305 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.565 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:41.565 06:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:41.826 true 00:09:41.826 06:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:41.826 06:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.767 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.029 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:43.029 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:43.029 true 00:09:43.029 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:43.029 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.289 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.550 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:43.550 06:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:43.810 true 00:09:43.810 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:43.810 06:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.750 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.012 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:45.012 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:45.012 true 00:09:45.272 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:45.272 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.272 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.532 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:45.532 06:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:45.794 true 00:09:45.794 06:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:45.794 06:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.737 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.998 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:46.999 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:47.259 true 00:09:47.259 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:47.259 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.259 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.522 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:47.522 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:47.784 true 00:09:47.784 06:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:47.784 06:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.726 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.987 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:48.987 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:49.248 true 00:09:49.248 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:49.248 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.248 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.509 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:49.509 06:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:49.771 true 00:09:49.771 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:49.771 06:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.714 06:12:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.974 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:50.974 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:51.235 true 00:09:51.235 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:51.235 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.494 06:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.494 06:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:51.494 06:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:51.754 true 00:09:51.754 06:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:51.755 06:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.696 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.980 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:52.980 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:53.267 true 00:09:53.267 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:53.267 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.527 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.527 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:53.527 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:53.789 true 00:09:53.789 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:53.789 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.731 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.992 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:54.992 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:55.254 true 00:09:55.254 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:55.254 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.515 06:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.515 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:55.515 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:55.775 true 00:09:55.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:55.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.718 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.977 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:56.977 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:57.237 true 00:09:57.237 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:57.237 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.498 06:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.498 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:57.498 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:57.759 true 00:09:57.759 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479 00:09:57.759 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.705 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:09:59.026 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:59.026 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:59.287 true
00:09:59.287 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479
00:09:59.287 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:59.546 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:59.546 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:59.546 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:59.806 true
00:09:59.806 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479
00:09:59.806 06:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:00.749 Initializing NVMe Controllers
00:10:00.749 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:10:00.749 Controller IO queue size 128, less than required.
00:10:00.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:00.749 Controller IO queue size 128, less than required.
00:10:00.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:00.749 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:00.749 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:00.749 Initialization complete. Launching workers.
00:10:00.749 ========================================================
00:10:00.749                                                                                       Latency(us)
00:10:00.749 Device Information                                                        :     IOPS     MiB/s   Average       min        max
00:10:00.749 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    331.50      0.16 218527.22   3136.09 1092233.41
00:10:00.749 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  15299.83      7.47   8365.88   2518.12  499462.05
00:10:00.749 ========================================================
00:10:00.749 Total                                                                     :  15631.33      7.63  12822.86   2518.12 1092233.41
00:10:00.749
00:10:00.749 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:01.009 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:10:01.009 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:10:01.009 true
00:10:01.271 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 75479
00:10:01.271 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (75479) - No such process
00:10:01.271 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 75479
00:10:01.271 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:01.271 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:01.532 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:01.532 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:01.532 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:01.532 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:01.532 06:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:01.793 null0
00:10:01.793 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:01.793 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:01.793 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:02.053 null1
00:10:02.053 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:02.053 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:02.053 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:02.053 null2
00:10:02.053 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.053 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.053 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:02.314 null3 00:10:02.314 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.314 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.314 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:02.575 null4 00:10:02.575 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.575 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.575 06:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:02.837 null5 00:10:02.837 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:02.837 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:02.837 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:03.099 null6 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:03.099 null7 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
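The long null_size=1008 ... 1030 stretch that ends above is a single hot-plug loop. Pieced together from the xtrace tags (@44-@53) -- and hedged accordingly, since only the trace is visible in this log -- ns_hotplug_stress.sh behaves roughly like:

    # sketch reconstructed from the @44-@53 xtrace above; variable names are guesses,
    # and "rpc.py" abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path
    while kill -0 "$PERF_PID"; do                        # @44: loop while the I/O generator (pid 75479 here) is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                     # @49: expands to 1008, 1009, ... in the trace
        rpc.py bdev_null_resize NULL1 "$null_size"       # @50: grow the null bdev by 1 MiB per pass
    done
    wait "$PERF_PID"                                     # @53: reap the generator once kill -0 fails

Note that a bare kill -0 with no stderr redirect is consistent with the "line 44: kill: (75479) - No such process" message printed above, which is what terminates the loop.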
00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.099 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
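Each backgrounded add_remove worker whose trace starts interleaving here appears, from the @14-@18 tags, to be a simple ten-iteration attach/detach loop; a hedged reconstruction:

    # reconstructed from the @14-@18 xtrace; the bound of 10 comes from the (( i < 10 )) guards
    add_remove() {
        local nsid=$1 bdev=$2                            # @14: e.g. add_remove 1 null0
        for ((i = 0; i < 10; ++i)); do                   # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }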
00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
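The spawning pattern around these @58-@64 lines: create one 100 MiB, 4096-byte-block null bdev per worker, fork eight add_remove jobs, and collect their pids for the wait that appears just below (76537 76538 ...). Approximately:

    # approximate shape of the parallel phase (@58-@66); exact syntax is inferred from the trace
    nthreads=8
    pids=()                                              # @58
    for ((i = 0; i < nthreads; ++i)); do                 # @59
        rpc.py bdev_null_create "null$i" 100 4096        # @60: name, size in MiB, block size in bytes
    done
    for ((i = 0; i < nthreads; ++i)); do                 # @62
        add_remove "$((i + 1))" "null$i" &               # @63: worker i hot-plugs NSID i+1
        pids+=($!)                                       # @64
    done
    wait "${pids[@]}"                                    # @66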
00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 76537 76538 76541 76542 76543 76545 76548 76550 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.100 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.361 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.361 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.361 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.361 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.361 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.361 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.623 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.623 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.623 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.623 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.623 06:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.623 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.624 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.624 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.624 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.624 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.624 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.885 
06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.885 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.147 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.408 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.408 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.408 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.408 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.408 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.408 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.409 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.670 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.670 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.670 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.670 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.670 
06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.670 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.671 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.671 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.671 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.671 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.671 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.932 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.193 06:12:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.193 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.473 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.768 06:12:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.768 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.769 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.769 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.769 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.769 06:12:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.769 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.769 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.769 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.031 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.292 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.554 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.554 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.554 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.554 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.815 06:12:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.815 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
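What this wall of RPC calls is exercising: ns_hotplug_stress.sh runs one racing worker per namespace, each doing ten add/remove rounds against the same subsystem while I/O continues, and the churn keeps going below until every counter reaches ten. A minimal sketch of one worker and the fan-out, assuming a running target and null bdevs null0..null7 (paths and the cnode1 NQN are the ones in the trace; the exact worker arrangement in the real script may differ):

# Sketch of the add/remove race traced above (not the verbatim script).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2 i=0
    while (( i < 10 )); do                          # matches the (( i < 10 )) guard in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        (( ++i ))
    done
}

for n in 0 1 2 3 4 5 6 7; do
    add_remove "$((n + 1))" "null$n" &              # one racing worker per namespace
done
wait                                                # every counter hits 10 before teardown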
00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.077 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.338 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.599 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.599 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.600 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.600 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.600 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.600 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.600 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.600 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.600 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.600 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.600 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.862 rmmod nvme_tcp 00:10:07.862 rmmod nvme_fabrics 00:10:07.862 rmmod nvme_keyring 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 75348 ']' 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 75348 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 75348 ']' 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 75348 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75348 00:10:07.862 killing process with pid 75348 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75348' 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 75348 00:10:07.862 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 75348 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:08.123 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:08.124 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.124 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.124 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:08.124 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.124 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.124 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:10:08.385 00:10:08.385 real 0m41.366s 00:10:08.385 user 3m14.577s 00:10:08.385 sys 0m9.466s 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
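With the loop drained, nvmftestfini unwinds the fixture in a fixed order, as traced above: kernel initiator modules out first, then the target process, then the iptables rules tagged at setup, and finally the veth/bridge/namespace plumbing. Roughly, as a sketch (device and namespace names are from this run; the `|| true` guards are added here for idempotence):

# Teardown order mirrored from the nvmftestfini trace (sketch only).
modprobe -v -r nvme-tcp                             # rmmod nvme_tcp/nvme_fabrics/nvme_keyring in the trace
modprobe -v -r nvme-fabrics                         # no-op once the first call unloaded it
kill "$nvmfpid" && wait "$nvmfpid" 2> /dev/null     # nvmfpid was 75348 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the rules tagged SPDK_NVMF
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true             # detach bridge ports
    ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true                 # deleting one end removes its veth peer
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true            # the namespace goes last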
00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.385 ************************************ 00:10:08.385 END TEST nvmf_ns_hotplug_stress 00:10:08.385 ************************************ 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.385 ************************************ 00:10:08.385 START TEST nvmf_delete_subsystem 00:10:08.385 ************************************ 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:08.385 * Looking for test storage... 00:10:08.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.385 --rc genhtml_branch_coverage=1 00:10:08.385 --rc genhtml_function_coverage=1 00:10:08.385 --rc genhtml_legend=1 00:10:08.385 --rc geninfo_all_blocks=1 00:10:08.385 --rc geninfo_unexecuted_blocks=1 00:10:08.385 00:10:08.385 ' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.385 --rc genhtml_branch_coverage=1 00:10:08.385 --rc genhtml_function_coverage=1 00:10:08.385 --rc genhtml_legend=1 00:10:08.385 --rc geninfo_all_blocks=1 00:10:08.385 --rc geninfo_unexecuted_blocks=1 00:10:08.385 00:10:08.385 ' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.385 --rc genhtml_branch_coverage=1 00:10:08.385 --rc genhtml_function_coverage=1 00:10:08.385 --rc genhtml_legend=1 00:10:08.385 --rc geninfo_all_blocks=1 00:10:08.385 --rc geninfo_unexecuted_blocks=1 00:10:08.385 00:10:08.385 ' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.385 --rc genhtml_branch_coverage=1 00:10:08.385 --rc genhtml_function_coverage=1 00:10:08.385 --rc genhtml_legend=1 00:10:08.385 --rc geninfo_all_blocks=1 00:10:08.385 --rc geninfo_unexecuted_blocks=1 00:10:08.385 00:10:08.385 ' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.385 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.386 
06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
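Backing up a step: the `lt 1.15 2` probe traced just before common.sh was sourced is scripts/common.sh deciding whether the installed lcov predates 2.x, which determines the coverage flags exported above. Its field-wise comparison, condensed to a sketch (the real cmp_versions also handles the other comparison operators and validates each field as a decimal):

# Sketch of the version compare traced above: split on .-: and compare field by field.
lt() {                                              # "is $1 < $2?" -- exit 0 when true
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}             # missing fields count as 0
        (( a > b )) && return 1                     # strictly greater -> not less
        (( a < b )) && return 0                     # strictly smaller -> less
    done
    return 1                                        # equal -> not less
}

lt 1.15 2 && echo "old lcov: export branch/function coverage flags"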
00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:08.386 Cannot find device "nvmf_init_br" 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:08.386 Cannot find device "nvmf_init_br2" 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:08.386 Cannot find device "nvmf_tgt_br" 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.386 Cannot find device "nvmf_tgt_br2" 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:10:08.386 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:08.647 Cannot find device "nvmf_init_br" 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:08.647 Cannot find device "nvmf_init_br2" 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:08.647 Cannot find device "nvmf_tgt_br" 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:08.647 Cannot find device "nvmf_tgt_br2" 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:08.647 Cannot find device "nvmf_br" 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:08.647 Cannot find device "nvmf_init_if" 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:08.647 Cannot find device "nvmf_init_if2" 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
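The "Cannot find device" and "Cannot open network namespace" lines are the pre-clean failing harmlessly on a fresh environment; nvmf_veth_init then rebuilds the topology the next entries trace: four veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1-4/24 addresses, and a bridge tying the host-side ends together. Condensed into a sketch (same names and addresses as the log; error handling omitted):

# Sketch of the topology built by the nvmf_veth_init entries that follow.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up     # tie the host ends together
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br                           # enslave all four bridge ports
done

The iptables ACCEPT rules for port 4420, tagged with an SPDK_NVMF comment so teardown can strip them selectively, and the four ping probes in the trace then verify the fabric before any NVMe traffic starts.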
00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.647 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:08.648 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:08.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:10:08.648 00:10:08.648 --- 10.0.0.3 ping statistics --- 00:10:08.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.648 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:08.648 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:08.648 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:10:08.648 00:10:08.648 --- 10.0.0.4 ping statistics --- 00:10:08.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.648 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:10:08.648 00:10:08.648 --- 10.0.0.1 ping statistics --- 00:10:08.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.648 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:08.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:08.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:10:08.648 00:10:08.648 --- 10.0.0.2 ping statistics --- 00:10:08.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.648 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=77922 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 77922 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 77922 ']' 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.648 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:08.909 [2024-11-27 06:12:28.199306] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
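After the ping checks pass, nvmfappstart launches nvmf_tgt inside the namespace (PID 77922 here) and waitforlisten blocks until the RPC socket answers. What that wait amounts to, as an illustrative loop (the real helper in autotest_common.sh is more thorough; /var/tmp/spdk.sock matches the rpc_addr in the trace, while the retry count and interval here are assumptions):

# Illustrative wait-for-RPC loop, not the verbatim waitforlisten.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock                             # rpc_addr in the trace
for (( i = 0; i < 100; i++ )); do
    kill -0 77922 2> /dev/null || { echo "target died during startup" >&2; exit 1; }
    if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
        break                                       # RPC server is answering; proceed
    fi
    sleep 0.1
done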
00:10:08.909 [2024-11-27 06:12:28.199359] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.909 [2024-11-27 06:12:28.314177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:08.909 [2024-11-27 06:12:28.343641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.909 [2024-11-27 06:12:28.361403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.909 [2024-11-27 06:12:28.361443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.909 [2024-11-27 06:12:28.361450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.909 [2024-11-27 06:12:28.361455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.909 [2024-11-27 06:12:28.361460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.909 [2024-11-27 06:12:28.362097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.909 [2024-11-27 06:12:28.362315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.872 [2024-11-27 06:12:29.122288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.872 [2024-11-27 06:12:29.138360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.872 NULL1 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.872 Delay0 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=77973 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:09.872 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:09.872 [2024-11-27 06:12:29.332879] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
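The rpc_cmd sequence just traced (delete_subsystem.sh@15-26) assembles the device under test. A condensed sketch using SPDK's stock scripts/rpc.py, which rpc_cmd wraps; the Unix-domain RPC socket is reachable from the host, so no ip netns exec is needed here, and all flags are copied from the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

    # NULL1: 1000 MiB null bdev with 512-byte blocks (writes discarded, reads
    # return zeroes). Delay0 wraps it and injects on the order of a second of
    # latency per I/O, so plenty of commands are still in flight when the
    # subsystem is deleted mid-run.
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns "$NQN" Delay0

    # Background load generator (perf_pid 77973 in this run); the script then
    # sleeps 2 s and calls nvmf_delete_subsystem underneath it.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &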
00:10:11.794 06:12:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.794 06:12:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.794 06:12:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 [2024-11-27 06:12:31.369335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f76c8000c40 is same with the state(6) to be set 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 
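(A note on the storm of completions above and below: sct=0, sc=8 is NVMe status code type 0, generic command status, with status code 0x08, Command Aborted due to SQ Deletion, which is exactly what queued I/O should report when nvmf_delete_subsystem tears down the qpairs mid-run; the initiator-side "starting I/O failed: -6" is the submit-time errno, most likely -ENXIO once the queues are gone.)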
00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Write completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 Read completed with error (sct=0, sc=8) 00:10:12.053 starting I/O failed: -6 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error 
(sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 starting I/O failed: -6 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 [2024-11-27 06:12:31.371441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa49b0 is same with the state(6) to be set 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read 
completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Read completed with error (sct=0, sc=8) 00:10:12.054 Write completed with error (sct=0, sc=8) 00:10:13.075 [2024-11-27 06:12:32.346822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa17e0 is same with the state(6) to be set 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 [2024-11-27 06:12:32.365319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f76c800d020 is same with the state(6) to be set 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed 
with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 [2024-11-27 06:12:32.365460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f76c800d800 is same with the state(6) to be set 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 [2024-11-27 06:12:32.366967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa4b90 is same with the state(6) to be set 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Write completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.075 Read completed with error (sct=0, sc=8) 00:10:13.076 Read completed with error (sct=0, sc=8) 00:10:13.076 Read completed with error (sct=0, sc=8) 00:10:13.076 Read completed with error (sct=0, sc=8) 00:10:13.076 Read completed with error (sct=0, sc=8) 00:10:13.076 Write completed with error (sct=0, sc=8) 00:10:13.076 Read completed with error (sct=0, sc=8) 00:10:13.076 Write completed with error (sct=0, sc=8) 00:10:13.076 Read completed with error 
(sct=0, sc=8) 00:10:13.076 Read completed with error (sct=0, sc=8) 00:10:13.076 Read completed with error (sct=0, sc=8) 00:10:13.076 Write completed with error (sct=0, sc=8) 00:10:13.076 Write completed with error (sct=0, sc=8) 00:10:13.076 Initializing NVMe Controllers 00:10:13.076 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.076 Controller IO queue size 128, less than required. 00:10:13.076 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:13.076 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:13.076 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:13.076 Initialization complete. Launching workers. 00:10:13.076 ======================================================== 00:10:13.076 Latency(us) 00:10:13.076 Device Information : IOPS MiB/s Average min max 00:10:13.076 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.87 0.08 912447.36 321.40 2001454.11 00:10:13.076 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.92 0.08 908485.13 2496.16 1012406.46 00:10:13.076 ======================================================== 00:10:13.076 Total : 334.80 0.16 910507.40 321.40 2001454.11 00:10:13.076 00:10:13.076 Write completed with error (sct=0, sc=8) 00:10:13.076 [2024-11-27 06:12:32.367375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa3aa0 is same with the state(6) to be set 00:10:13.076 [2024-11-27 06:12:32.368117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa17e0 (9): Bad file descriptor 00:10:13.076 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:13.076 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.076 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:13.076 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 77973 00:10:13.076 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:13.648 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 77973 00:10:13.649 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (77973) - No such process 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 77973 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 77973 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 77973 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.649 [2024-11-27 06:12:32.889056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=78024 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:13.649 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:13.649 [2024-11-27 06:12:33.066537] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:10:13.911 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:13.911 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:13.911 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:14.484 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:14.484 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:14.484 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.084 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:15.084 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:15.084 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.652 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:15.652 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:15.652 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.909 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:15.909 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:15.909 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:16.476 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:16.476 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:16.476 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:16.734 Initializing NVMe Controllers 00:10:16.734 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:16.734 Controller IO queue size 128, less than required. 00:10:16.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:16.734 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:16.734 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:16.734 Initialization complete. Launching workers. 
00:10:16.734 ======================================================== 00:10:16.734 Latency(us) 00:10:16.734 Device Information : IOPS MiB/s Average min max 00:10:16.734 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002585.43 1000075.50 1042481.02 00:10:16.734 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003855.05 1000100.62 1011362.80 00:10:16.734 ======================================================== 00:10:16.734 Total : 256.00 0.12 1003220.24 1000075.50 1042481.02 00:10:16.734 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 78024 00:10:16.990 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (78024) - No such process 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 78024 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.990 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.990 rmmod nvme_tcp 00:10:16.990 rmmod nvme_fabrics 00:10:17.250 rmmod nvme_keyring 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 77922 ']' 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 77922 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 77922 ']' 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 77922 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77922 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.250 killing process with pid 77922 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77922' 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 77922 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 77922 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:17.250 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:10:17.511 00:10:17.511 real 0m9.209s 00:10:17.511 user 0m28.503s 00:10:17.511 sys 0m1.303s 00:10:17.511 ************************************ 00:10:17.511 END TEST nvmf_delete_subsystem 00:10:17.511 ************************************ 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.511 ************************************ 00:10:17.511 START TEST nvmf_host_management 00:10:17.511 ************************************ 00:10:17.511 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:17.511 * Looking for test storage... 00:10:17.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.511 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.511 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.511 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:17.773 
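The scripts/common.sh xtrace that begins here (and continues below) is the version comparison behind lt 1.15 2, used to decide which lcov coverage flags the installed lcov understands. A rough reconstruction of the traced logic, simplified for illustration and not the verbatim common.sh source:

    # Split both version strings on '.', '-' or ':' and compare numerically,
    # field by field, treating missing fields as 0.
    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local op="$2" v
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a="${ver1[v]:-0}" b="${ver2[v]:-0}"
            (( a > b )) && { [[ $op == *'>'* ]]; return; }
            (( a < b )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "installed lcov predates 2.x"

Here lcov 1.15 is older than 2, so the test exports the pre-2.x LCOV_OPTS seen in the trace below.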
06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.773 --rc genhtml_branch_coverage=1 00:10:17.773 --rc genhtml_function_coverage=1 00:10:17.773 --rc genhtml_legend=1 00:10:17.773 --rc geninfo_all_blocks=1 00:10:17.773 --rc geninfo_unexecuted_blocks=1 00:10:17.773 00:10:17.773 ' 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.773 --rc genhtml_branch_coverage=1 00:10:17.773 --rc genhtml_function_coverage=1 00:10:17.773 --rc genhtml_legend=1 00:10:17.773 --rc geninfo_all_blocks=1 00:10:17.773 --rc geninfo_unexecuted_blocks=1 00:10:17.773 00:10:17.773 ' 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.773 --rc genhtml_branch_coverage=1 00:10:17.773 --rc genhtml_function_coverage=1 00:10:17.773 --rc genhtml_legend=1 00:10:17.773 --rc geninfo_all_blocks=1 00:10:17.773 --rc geninfo_unexecuted_blocks=1 00:10:17.773 00:10:17.773 ' 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.773 --rc genhtml_branch_coverage=1 00:10:17.773 --rc 
genhtml_function_coverage=1 00:10:17.773 --rc genhtml_legend=1 00:10:17.773 --rc geninfo_all_blocks=1 00:10:17.773 --rc geninfo_unexecuted_blocks=1 00:10:17.773 00:10:17.773 ' 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.773 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:17.774 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:17.774 06:12:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:17.774 Cannot find device "nvmf_init_br" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:17.774 Cannot find device "nvmf_init_br2" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:17.774 Cannot find device "nvmf_tgt_br" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.774 Cannot find device "nvmf_tgt_br2" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:17.774 Cannot find device "nvmf_init_br" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:17.774 Cannot find device "nvmf_init_br2" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:17.774 Cannot find device "nvmf_tgt_br" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:17.774 Cannot find device "nvmf_tgt_br2" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:17.774 Cannot find device "nvmf_br" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:17.774 06:12:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:17.774 Cannot find device "nvmf_init_if" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:17.774 Cannot find device "nvmf_init_if2" 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.774 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:18.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:18.037 00:10:18.037 --- 10.0.0.3 ping statistics --- 00:10:18.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.037 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:18.037 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:10:18.037 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:10:18.037 00:10:18.037 --- 10.0.0.4 ping statistics --- 00:10:18.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.037 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:18.037 00:10:18.037 --- 10.0.0.1 ping statistics --- 00:10:18.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.037 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:18.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:18.037 00:10:18.037 --- 10.0.0.2 ping statistics --- 00:10:18.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.037 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=78301 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 78301 00:10:18.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
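The trace above is nvmf_veth_init building the test topology: four veth pairs, with nvmf_tgt_if/nvmf_tgt_if2 moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4) and nvmf_init_if/nvmf_init_if2 left on the host (10.0.0.1/10.0.0.2), their peer ends enslaved to the nvmf_br bridge; iptables then opens TCP/4420 and the four pings prove reachability in both directions. nvmfappstart launches nvmf_tgt inside the namespace via "ip netns exec", and waitforlisten blocks until the RPC socket answers. A minimal sketch of that readiness poll, assuming SPDK's scripts/rpc.py (the harness's actual waitforlisten is more elaborate):

    # Hedged sketch, not the harness code: poll the UNIX-domain RPC socket
    # until the freshly started target answers a trivial RPC.
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break    # target is up and listening on /var/tmp/spdk.sock
        fi
        sleep 0.1
    done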
00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 78301 ']' 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.037 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.038 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.038 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.038 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.038 [2024-11-27 06:12:37.507665] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:10:18.038 [2024-11-27 06:12:37.507723] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.299 [2024-11-27 06:12:37.626818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:18.299 [2024-11-27 06:12:37.651846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.299 [2024-11-27 06:12:37.672371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.299 [2024-11-27 06:12:37.672422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.299 [2024-11-27 06:12:37.672429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.299 [2024-11-27 06:12:37.672435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.299 [2024-11-27 06:12:37.672441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
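The app_setup_trace notices above mean the target was started with every nvmf tracepoint group enabled (-e 0xFFFF) and keeps its trace ring in shared memory. Following the log's own hints, a snapshot can be pulled at runtime or the raw file preserved for offline decoding (the copy destination below is illustrative, not from the trace):

    spdk_trace -s nvmf -i 0              # command quoted from the notice above
    cp /dev/shm/nvmf_trace.0 /tmp/       # keep the raw ring for offline analysis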
00:10:18.299 [2024-11-27 06:12:37.673349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.299 [2024-11-27 06:12:37.673759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.299 [2024-11-27 06:12:37.673927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:18.299 [2024-11-27 06:12:37.673927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.299 [2024-11-27 06:12:37.783762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.299 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.561 Malloc0 00:10:18.561 [2024-11-27 06:12:37.847984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=78354 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 78354 /var/tmp/bdevperf.sock 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 78354 ']' 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:18.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.561 { 00:10:18.561 "params": { 00:10:18.561 "name": "Nvme$subsystem", 00:10:18.561 "trtype": "$TEST_TRANSPORT", 00:10:18.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.561 "adrfam": "ipv4", 00:10:18.561 "trsvcid": "$NVMF_PORT", 00:10:18.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.561 "hdgst": ${hdgst:-false}, 00:10:18.561 "ddgst": ${ddgst:-false} 00:10:18.561 }, 00:10:18.561 "method": "bdev_nvme_attach_controller" 00:10:18.561 } 00:10:18.561 EOF 00:10:18.561 )") 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:18.561 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.561 "params": { 00:10:18.561 "name": "Nvme0", 00:10:18.561 "trtype": "tcp", 00:10:18.561 "traddr": "10.0.0.3", 00:10:18.561 "adrfam": "ipv4", 00:10:18.561 "trsvcid": "4420", 00:10:18.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:18.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:18.561 "hdgst": false, 00:10:18.561 "ddgst": false 00:10:18.561 }, 00:10:18.561 "method": "bdev_nvme_attach_controller" 00:10:18.561 }' 00:10:18.561 [2024-11-27 06:12:37.925649] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
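Above, gen_nvmf_target_json expands its heredoc into the one-controller bdevperf config echoed by printf (Nvme0 attaching over TCP to 10.0.0.3:4420), and bdevperf receives it as --json /dev/fd/63, i.e. through process substitution. The launch pattern reassembled from the trace, followed by the waitforio gate from host_management.sh@55 (rpc_cmd in the trace is assumed here to forward to scripts/rpc.py):

    # Launch pattern from the trace: the generated JSON reaches bdevperf
    # on fd 63 via process substitution.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # waitforio-style gate: require at least 100 completed reads before the
    # host-removal step may fire (963 observed in this run).
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ]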
00:10:18.561 [2024-11-27 06:12:37.925706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78354 ] 00:10:18.561 [2024-11-27 06:12:38.039705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:18.822 [2024-11-27 06:12:38.117912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.822 [2024-11-27 06:12:38.150308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.822 Running I/O for 10 seconds... 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:10:19.399 06:12:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.399 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.399 [2024-11-27 06:12:38.862323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:19.399 [2024-11-27 06:12:38.862469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 
06:12:38.862607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.399 [2024-11-27 06:12:38.862675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.399 [2024-11-27 06:12:38.862680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862870] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.862994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.862999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:19.400 [2024-11-27 06:12:38.863128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.400 [2024-11-27 06:12:38.863135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:19.400 [2024-11-27 06:12:38.863141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:19.400 [2024-11-27 06:12:38.863148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:19.400 [2024-11-27 06:12:38.863154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:19.400 [2024-11-27 06:12:38.863161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:19.400 [2024-11-27 06:12:38.863167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:19.400 [2024-11-27 06:12:38.863174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:19.400 [2024-11-27 06:12:38.863179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:19.400 [2024-11-27 06:12:38.863187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:19.400 [2024-11-27 06:12:38.863192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:19.400 [2024-11-27 06:12:38.863200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:19.401 [2024-11-27 06:12:38.863206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:19.401 [2024-11-27 06:12:38.863212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5cf30 is same with the state(6) to be set
00:10:19.401 [2024-11-27 06:12:38.864454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:10:19.401 task offset: 8064 on job bdev=Nvme0n1 fails
00:10:19.401
00:10:19.401 Latency(us)
[2024-11-27T06:12:38.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:19.401 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:19.401 Job: Nvme0n1 ended in about 0.55 seconds with error
00:10:19.401 Verification LBA range: start 0x0 length 0x400
00:10:19.401 Nvme0n1 : 0.55 1865.44 116.59 116.59 0.00 30478.38 4864.79 51017.26
[2024-11-27T06:12:38.931Z] ===================================================================================================================
00:10:19.401 [2024-11-27T06:12:38.931Z] Total : 1865.44 116.59 116.59 0.00 30478.38 4864.79 51017.26
00:10:19.401 [2024-11-27 06:12:38.866387] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:19.401 [2024-11-27 06:12:38.866411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae1d20 (9): Bad file descriptor
00:10:19.401 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.401 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:19.401 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.401 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.401 [2024-11-27 06:12:38.868507] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:10:19.401 [2024-11-27 06:12:38.868575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:10:19.401 [2024-11-27 06:12:38.868595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:19.401 [2024-11-27 06:12:38.868607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:10:19.401 [2024-11-27 06:12:38.868612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:10:19.401 [2024-11-27 06:12:38.868618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:10:19.401 [2024-11-27 06:12:38.868623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ae1d20 00:10:19.401 [2024-11-27 06:12:38.868644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae1d20 (9): Bad file descriptor 00:10:19.401 [2024-11-27 06:12:38.868654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:10:19.401 [2024-11-27 06:12:38.868663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:10:19.401 [2024-11-27 06:12:38.868670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:10:19.401 [2024-11-27 06:12:38.868676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
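The ABORTED - SQ DELETION storm and the connect failures above are the intended payload of this test, not collateral damage: host_management.sh@84 called nvmf_subsystem_remove_host while bdevperf had 64 reads queued, the target tore down the queue pair, and every reconnect attempt is refused with "does not allow host" (CONNECT completion sct 1, sc 132) until @85 re-adds the host. The toggle in plain RPC form, again assuming rpc_cmd forwards to scripts/rpc.py:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ...in-flight I/O aborts; reconnects fail with sct 1, sc 132 (host not allowed)...
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0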
00:10:19.401 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.401 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 78354 00:10:20.787 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (78354) - No such process 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:20.787 { 00:10:20.787 "params": { 00:10:20.787 "name": "Nvme$subsystem", 00:10:20.787 "trtype": "$TEST_TRANSPORT", 00:10:20.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.787 "adrfam": "ipv4", 00:10:20.787 "trsvcid": "$NVMF_PORT", 00:10:20.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.787 "hdgst": ${hdgst:-false}, 00:10:20.787 "ddgst": ${ddgst:-false} 00:10:20.787 }, 00:10:20.787 "method": "bdev_nvme_attach_controller" 00:10:20.787 } 00:10:20.787 EOF 00:10:20.787 )") 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:20.787 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:20.787 "params": { 00:10:20.787 "name": "Nvme0", 00:10:20.787 "trtype": "tcp", 00:10:20.787 "traddr": "10.0.0.3", 00:10:20.787 "adrfam": "ipv4", 00:10:20.787 "trsvcid": "4420", 00:10:20.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:20.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:20.787 "hdgst": false, 00:10:20.787 "ddgst": false 00:10:20.787 }, 00:10:20.787 "method": "bdev_nvme_attach_controller" 00:10:20.787 }' 00:10:20.787 [2024-11-27 06:12:39.921722] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
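Since the first bdevperf already exited when its controller reset failed, the kill -9 at host_management.sh@91 finds nothing and reports "No such process"; the bare "true" in the trace shows the script swallowing that status before clearing stale per-core lock files and relaunching bdevperf for a clean 1-second verify run, which demonstrates the subsystem accepts the host again. The tolerant-cleanup idiom, inferred from the trace:

    # Inferred from the trace (the "|| true" is an assumption based on the
    # bare "true" at line 91): a dead perf process must not fail the test.
    kill -9 "$perfpid" || true
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004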
00:10:20.787 [2024-11-27 06:12:39.921786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78403 ]
00:10:20.787 [2024-11-27 06:12:40.035276] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:20.787 [2024-11-27 06:12:40.064750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:20.787 [2024-11-27 06:12:40.083920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.787 Running I/O for 1 seconds... 2090.00 IOPS, 130.62 MiB/s
00:10:21.984
00:10:21.984 Latency(us)
[2024-11-27T06:12:41.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-27T06:12:41.514Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:21.984 Verification LBA range: start 0x0 length 0x400
00:10:21.984 Nvme0n1 : 1.03 2109.97 131.87 0.00 0.00 29785.20 3730.51 28432.54
[2024-11-27T06:12:41.514Z] ===================================================================================================================
[2024-11-27T06:12:41.514Z] Total : 2109.97 131.87 0.00 0.00 29785.20 3730.51 28432.54
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 78301 ']'
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 78301
00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z
78301 ']' 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 78301 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78301 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:21.984 killing process with pid 78301 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78301' 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 78301 00:10:21.984 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 78301 00:10:22.244 [2024-11-27 06:12:41.578200] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:22.244 06:12:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:22.244 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:22.504 00:10:22.504 real 0m4.892s 00:10:22.504 user 0m18.140s 00:10:22.504 sys 0m1.059s 00:10:22.504 ************************************ 00:10:22.504 END TEST nvmf_host_management 00:10:22.504 ************************************ 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.504 ************************************ 00:10:22.504 START TEST nvmf_lvol 00:10:22.504 ************************************ 00:10:22.504 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:22.504 * Looking for test storage... 
00:10:22.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.505 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.505 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.505 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.505 --rc genhtml_branch_coverage=1 00:10:22.505 --rc genhtml_function_coverage=1 00:10:22.505 --rc genhtml_legend=1 00:10:22.505 --rc geninfo_all_blocks=1 00:10:22.505 --rc geninfo_unexecuted_blocks=1 00:10:22.505 00:10:22.505 ' 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.505 --rc genhtml_branch_coverage=1 00:10:22.505 --rc genhtml_function_coverage=1 00:10:22.505 --rc genhtml_legend=1 00:10:22.505 --rc geninfo_all_blocks=1 00:10:22.505 --rc geninfo_unexecuted_blocks=1 00:10:22.505 00:10:22.505 ' 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.505 --rc genhtml_branch_coverage=1 00:10:22.505 --rc genhtml_function_coverage=1 00:10:22.505 --rc genhtml_legend=1 00:10:22.505 --rc geninfo_all_blocks=1 00:10:22.505 --rc geninfo_unexecuted_blocks=1 00:10:22.505 00:10:22.505 ' 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.505 --rc genhtml_branch_coverage=1 00:10:22.505 --rc genhtml_function_coverage=1 00:10:22.505 --rc genhtml_legend=1 00:10:22.505 --rc geninfo_all_blocks=1 00:10:22.505 --rc geninfo_unexecuted_blocks=1 00:10:22.505 00:10:22.505 ' 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.505 06:12:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.505 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.767 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.768 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:22.768 
06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:22.768 Cannot find device "nvmf_init_br" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:22.768 Cannot find device "nvmf_init_br2" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:22.768 Cannot find device "nvmf_tgt_br" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.768 Cannot find device "nvmf_tgt_br2" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:22.768 Cannot find device "nvmf_init_br" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:22.768 Cannot find device "nvmf_init_br2" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:22.768 Cannot find device "nvmf_tgt_br" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:22.768 Cannot find device "nvmf_tgt_br2" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:22.768 Cannot find device "nvmf_br" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:22.768 Cannot find device "nvmf_init_if" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:22.768 Cannot find device "nvmf_init_if2" 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.768 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.769 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:23.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:23.086 00:10:23.086 --- 10.0.0.3 ping statistics --- 00:10:23.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.086 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:23.086 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:23.086 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:10:23.086 00:10:23.086 --- 10.0.0.4 ping statistics --- 00:10:23.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.086 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:23.086 00:10:23.086 --- 10.0.0.1 ping statistics --- 00:10:23.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.086 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:23.086 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:23.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:23.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:23.086 00:10:23.086 --- 10.0.0.2 ping statistics --- 00:10:23.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.087 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=78667 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 78667 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 78667 ']' 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.087 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:23.087 [2024-11-27 06:12:42.385949] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:10:23.087 [2024-11-27 06:12:42.386002] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.087 [2024-11-27 06:12:42.501124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
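Annotation: the "Cannot find device" and "Cannot open network namespace" messages earlier in this block are expected — nvmf_veth_init probes for leftovers from the previous test before rebuilding the virtual network, and the four pings just above then prove every initiator/target address pair is reachable. Condensed from the trace, the topology it builds is sketched below; every command shown appears verbatim above (the second interface pair nvmf_init_if2/nvmf_tgt_if2 and the symmetric addr/up commands are omitted for brevity).

  # Host-side veth ends (nvmf_init_*) and namespace-side ends (nvmf_tgt_*)
  # meet on one bridge, so NVMe/TCP to 10.0.0.3:4420 never leaves the box.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # initiator -> target, as verified above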
00:10:23.087 [2024-11-27 06:12:42.527285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.087 [2024-11-27 06:12:42.545576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.087 [2024-11-27 06:12:42.545612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.087 [2024-11-27 06:12:42.545618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.087 [2024-11-27 06:12:42.545623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.087 [2024-11-27 06:12:42.545627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.087 [2024-11-27 06:12:42.546417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.087 [2024-11-27 06:12:42.546724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.087 [2024-11-27 06:12:42.546977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:24.026 [2024-11-27 06:12:43.483213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.026 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.285 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:24.285 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.543 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:24.543 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:24.800 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:25.057 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0eb1603c-fcb1-4c46-83ba-0dd34f53dfc9 00:10:25.057 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0eb1603c-fcb1-4c46-83ba-0dd34f53dfc9 lvol 20 00:10:25.315 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7825b5ef-ca42-452a-b89d-108359920bb0 00:10:25.315 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:25.315 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7825b5ef-ca42-452a-b89d-108359920bb0 00:10:25.572 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:25.829 [2024-11-27 06:12:45.197417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.829 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:26.087 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=78809 00:10:26.087 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:26.087 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:27.017 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7825b5ef-ca42-452a-b89d-108359920bb0 MY_SNAPSHOT 00:10:27.275 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=be5ac46b-0fb7-4948-a7d6-f1d78f1a5e24 00:10:27.275 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7825b5ef-ca42-452a-b89d-108359920bb0 30 00:10:27.533 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone be5ac46b-0fb7-4948-a7d6-f1d78f1a5e24 MY_CLONE 00:10:27.791 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fea951cf-58c4-479c-9d7a-97eba6847a69 00:10:27.791 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate fea951cf-58c4-479c-9d7a-97eba6847a69 00:10:28.356 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 78809 00:10:36.463 Initializing NVMe Controllers 00:10:36.463 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:36.463 Controller IO queue size 128, less than required. 00:10:36.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:36.463 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:36.463 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:36.463 Initialization complete. Launching workers. 
00:10:36.463 ======================================================== 00:10:36.463 Latency(us) 00:10:36.463 Device Information : IOPS MiB/s Average min max 00:10:36.463 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16425.60 64.16 7796.21 1579.51 37158.74 00:10:36.463 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15169.10 59.25 8439.33 3163.72 50249.62 00:10:36.463 ======================================================== 00:10:36.463 Total : 31594.70 123.42 8104.98 1579.51 50249.62 00:10:36.463 00:10:36.463 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:36.463 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7825b5ef-ca42-452a-b89d-108359920bb0 00:10:36.721 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0eb1603c-fcb1-4c46-83ba-0dd34f53dfc9 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.979 rmmod nvme_tcp 00:10:36.979 rmmod nvme_fabrics 00:10:36.979 rmmod nvme_keyring 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 78667 ']' 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 78667 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 78667 ']' 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 78667 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78667 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.979 killing process with pid 78667 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 78667' 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 78667 00:10:36.979 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 78667 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.237 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:37.496 00:10:37.496 real 0m14.885s 00:10:37.496 user 1m3.026s 00:10:37.496 sys 0m3.056s 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:37.496 ************************************ 00:10:37.496 END TEST nvmf_lvol 00:10:37.496 ************************************ 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.496 ************************************ 00:10:37.496 START TEST nvmf_lvs_grow 00:10:37.496 ************************************ 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:37.496 * Looking for test storage... 00:10:37.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.496 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.497 --rc genhtml_branch_coverage=1 00:10:37.497 --rc genhtml_function_coverage=1 00:10:37.497 --rc genhtml_legend=1 00:10:37.497 --rc geninfo_all_blocks=1 00:10:37.497 --rc geninfo_unexecuted_blocks=1 00:10:37.497 00:10:37.497 ' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.497 --rc genhtml_branch_coverage=1 00:10:37.497 --rc genhtml_function_coverage=1 00:10:37.497 --rc genhtml_legend=1 00:10:37.497 --rc geninfo_all_blocks=1 00:10:37.497 --rc geninfo_unexecuted_blocks=1 00:10:37.497 00:10:37.497 ' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.497 --rc genhtml_branch_coverage=1 00:10:37.497 --rc genhtml_function_coverage=1 00:10:37.497 --rc genhtml_legend=1 00:10:37.497 --rc geninfo_all_blocks=1 00:10:37.497 --rc geninfo_unexecuted_blocks=1 00:10:37.497 00:10:37.497 ' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.497 --rc genhtml_branch_coverage=1 00:10:37.497 --rc genhtml_function_coverage=1 00:10:37.497 --rc genhtml_legend=1 00:10:37.497 --rc geninfo_all_blocks=1 00:10:37.497 --rc geninfo_unexecuted_blocks=1 00:10:37.497 00:10:37.497 ' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:37.497 06:12:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.497 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
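The trace above shows nvmf/common.sh deriving the initiator identity with `nvme gen-hostnqn`, which emits a UUID-based NQN of the form nqn.2014-08.org.nvmexpress:uuid:<UUID>; the same UUID is then reused as NVME_HOSTID. A minimal equivalent without nvme-cli, assuming only that uuidgen is available (an illustration, not part of the test scripts):

    # Build a spec-conformant UUID-based host NQN, mirroring `nvme gen-hostnqn`
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    NVME_HOSTID="${NVME_HOSTNQN##*:uuid:}"   # reuse the UUID as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

The "[: : integer expression expected" message above comes from the run itself: line 33 of common.sh evaluates '[' '' -eq 1 ']' on an empty value, the numeric test fails harmlessly, and the script falls through to its defaults.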
00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:37.497 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
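nvmf_veth_init is laying out the addressing plan here (the second target bridge name follows just below): 10.0.0.1/24 and 10.0.0.2/24 for the host-side initiator interfaces, 10.0.0.3/24 and 10.0.0.4/24 for target interfaces that will live inside the nvmf_tgt_ns_spdk namespace, all to be joined through the nvmf_br bridge. The "Cannot find device" and "Cannot open network namespace" errors that follow are expected: the function first tears down leftovers from any previous run, and the trace shows each failing command immediately followed by true, i.e. the `cmd || true` idiom. A condensed sketch of that idempotent cleanup (abridged; the trace also deletes nvmf_tgt_if2 inside the namespace):

    # Best-effort teardown: every device may legitimately be absent
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true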
00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:37.498 Cannot find device "nvmf_init_br" 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:37.498 Cannot find device "nvmf_init_br2" 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:37.498 Cannot find device "nvmf_tgt_br" 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.498 Cannot find device "nvmf_tgt_br2" 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:37.498 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:37.498 Cannot find device "nvmf_init_br" 00:10:37.498 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:37.498 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:37.498 Cannot find device "nvmf_init_br2" 00:10:37.498 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:37.498 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:37.498 Cannot find device "nvmf_tgt_br" 00:10:37.498 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:37.498 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:37.756 Cannot find device "nvmf_tgt_br2" 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:37.756 Cannot find device "nvmf_br" 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:37.756 Cannot find device "nvmf_init_if" 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:37.756 Cannot find device "nvmf_init_if2" 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:37.756 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
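With the stale state cleared, the trace rebuilds the topology: four veth pairs, the target-side ends (nvmf_tgt_if, nvmf_tgt_if2) moved into the nvmf_tgt_ns_spdk namespace, addresses assigned, everything brought up, and all four host-side peers enslaved to the nvmf_br bridge so the host and the namespace share one L2 segment. A condensed sketch of a single initiator/target leg, using the same names and addresses as the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3   # host -> namespaced target end, across the bridge

Next the ipts wrapper inserts iptables ACCEPT rules for TCP port 4420 on the initiator interfaces; as its expansion shows, it appends -m comment --comment 'SPDK_NVMF:<original args>' to every rule, presumably so teardown can identify the test's rules later. The four pings then verify each leg of the path in both directions before any NVMe/TCP traffic flows.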
00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:37.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:37.757 00:10:37.757 --- 10.0.0.3 ping statistics --- 00:10:37.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.757 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:37.757 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:37.757 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:10:37.757 00:10:37.757 --- 10.0.0.4 ping statistics --- 00:10:37.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.757 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:37.757 00:10:37.757 --- 10.0.0.1 ping statistics --- 00:10:37.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.757 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:37.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:10:37.757 00:10:37.757 --- 10.0.0.2 ping statistics --- 00:10:37.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.757 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=79220 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 79220 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 79220 ']' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.757 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.757 [2024-11-27 06:12:57.277350] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:10:37.757 [2024-11-27 06:12:57.277408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.015 [2024-11-27 06:12:57.391861] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:38.016 [2024-11-27 06:12:57.421784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.016 [2024-11-27 06:12:57.439162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.016 [2024-11-27 06:12:57.439199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.016 [2024-11-27 06:12:57.439205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.016 [2024-11-27 06:12:57.439210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.016 [2024-11-27 06:12:57.439214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.016 [2024-11-27 06:12:57.439462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.658 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.658 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:38.658 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:38.658 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:38.658 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.658 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.658 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.916 [2024-11-27 06:12:58.371243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.916 ************************************ 00:10:38.916 START TEST lvs_grow_clean 00:10:38.916 ************************************ 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:38.916 
06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:38.916 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:39.173 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:39.173 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:39.430 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f3060472-d724-4f6a-b409-d737747b11da 00:10:39.430 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:39.430 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:39.689 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:39.689 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:39.689 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f3060472-d724-4f6a-b409-d737747b11da lvol 150 00:10:39.950 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8a8ad300-0859-46b5-92a2-2e36e71e13f0 00:10:39.950 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:39.950 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:40.208 [2024-11-27 06:12:59.483017] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:40.208 [2024-11-27 06:12:59.483074] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:40.208 true 00:10:40.208 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:40.208 06:12:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:40.208 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:40.208 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:40.466 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a8ad300-0859-46b5-92a2-2e36e71e13f0 00:10:40.725 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:40.725 [2024-11-27 06:13:00.207408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:40.725 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:40.982 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79383 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79383 /var/tmp/bdevperf.sock 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 79383 ']' 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.983 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:40.983 [2024-11-27 06:13:00.420965] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:10:40.983 [2024-11-27 06:13:00.421032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79383 ] 00:10:41.240 [2024-11-27 06:13:00.534827] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:41.240 [2024-11-27 06:13:00.561565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.240 [2024-11-27 06:13:00.580491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.805 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.805 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:41.805 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:42.064 Nvme0n1 00:10:42.064 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:42.323 [ 00:10:42.323 { 00:10:42.323 "aliases": [ 00:10:42.323 "8a8ad300-0859-46b5-92a2-2e36e71e13f0" 00:10:42.323 ], 00:10:42.323 "assigned_rate_limits": { 00:10:42.323 "r_mbytes_per_sec": 0, 00:10:42.323 "rw_ios_per_sec": 0, 00:10:42.323 "rw_mbytes_per_sec": 0, 00:10:42.323 "w_mbytes_per_sec": 0 00:10:42.323 }, 00:10:42.323 "block_size": 4096, 00:10:42.323 "claimed": false, 00:10:42.323 "driver_specific": { 00:10:42.323 "mp_policy": "active_passive", 00:10:42.323 "nvme": [ 00:10:42.323 { 00:10:42.323 "ctrlr_data": { 00:10:42.323 "ana_reporting": false, 00:10:42.323 "cntlid": 1, 00:10:42.323 "firmware_revision": "25.01", 00:10:42.323 "model_number": "SPDK bdev Controller", 00:10:42.323 "multi_ctrlr": true, 00:10:42.323 "oacs": { 00:10:42.323 "firmware": 0, 00:10:42.323 "format": 0, 00:10:42.323 "ns_manage": 0, 00:10:42.323 "security": 0 00:10:42.323 }, 00:10:42.323 "serial_number": "SPDK0", 00:10:42.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:42.323 "vendor_id": "0x8086" 00:10:42.323 }, 00:10:42.323 "ns_data": { 00:10:42.323 "can_share": true, 00:10:42.323 "id": 1 00:10:42.323 }, 00:10:42.323 "trid": { 00:10:42.323 "adrfam": "IPv4", 00:10:42.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:42.323 "traddr": "10.0.0.3", 00:10:42.323 "trsvcid": "4420", 00:10:42.323 "trtype": "TCP" 00:10:42.323 }, 00:10:42.323 "vs": { 00:10:42.323 "nvme_version": "1.3" 00:10:42.323 } 00:10:42.323 } 00:10:42.323 ] 00:10:42.323 }, 00:10:42.323 "memory_domains": [ 00:10:42.323 { 00:10:42.323 "dma_device_id": "system", 00:10:42.323 "dma_device_type": 1 00:10:42.323 } 00:10:42.323 ], 00:10:42.323 "name": "Nvme0n1", 00:10:42.323 "num_blocks": 38912, 00:10:42.323 "numa_id": -1, 00:10:42.323 "product_name": "NVMe disk", 00:10:42.323 "supported_io_types": { 00:10:42.323 "abort": true, 00:10:42.323 "compare": true, 00:10:42.323 "compare_and_write": true, 00:10:42.323 "copy": true, 00:10:42.323 "flush": true, 00:10:42.323 "get_zone_info": false, 00:10:42.323 "nvme_admin": true, 00:10:42.323 "nvme_io": true, 00:10:42.323 "nvme_io_md": false, 00:10:42.323 
"nvme_iov_md": false, 00:10:42.323 "read": true, 00:10:42.323 "reset": true, 00:10:42.323 "seek_data": false, 00:10:42.323 "seek_hole": false, 00:10:42.323 "unmap": true, 00:10:42.323 "write": true, 00:10:42.323 "write_zeroes": true, 00:10:42.323 "zcopy": false, 00:10:42.323 "zone_append": false, 00:10:42.323 "zone_management": false 00:10:42.323 }, 00:10:42.323 "uuid": "8a8ad300-0859-46b5-92a2-2e36e71e13f0", 00:10:42.323 "zoned": false 00:10:42.323 } 00:10:42.323 ] 00:10:42.323 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79425 00:10:42.323 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:42.323 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:42.323 Running I/O for 10 seconds... 00:10:43.257 Latency(us) 00:10:43.257 [2024-11-27T06:13:02.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.257 Nvme0n1 : 1.00 12158.00 47.49 0.00 0.00 0.00 0.00 0.00 00:10:43.257 [2024-11-27T06:13:02.787Z] =================================================================================================================== 00:10:43.257 [2024-11-27T06:13:02.787Z] Total : 12158.00 47.49 0.00 0.00 0.00 0.00 0.00 00:10:43.257 00:10:44.191 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f3060472-d724-4f6a-b409-d737747b11da 00:10:44.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.448 Nvme0n1 : 2.00 12548.50 49.02 0.00 0.00 0.00 0.00 0.00 00:10:44.448 [2024-11-27T06:13:03.978Z] =================================================================================================================== 00:10:44.448 [2024-11-27T06:13:03.978Z] Total : 12548.50 49.02 0.00 0.00 0.00 0.00 0.00 00:10:44.448 00:10:44.448 true 00:10:44.448 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:44.448 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:44.706 06:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:44.706 06:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:44.706 06:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 79425 00:10:45.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.272 Nvme0n1 : 3.00 12595.33 49.20 0.00 0.00 0.00 0.00 0.00 00:10:45.273 [2024-11-27T06:13:04.803Z] =================================================================================================================== 00:10:45.273 [2024-11-27T06:13:04.803Z] Total : 12595.33 49.20 0.00 0.00 0.00 0.00 0.00 00:10:45.273 00:10:46.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.664 Nvme0n1 : 4.00 12590.25 49.18 0.00 0.00 0.00 0.00 0.00 00:10:46.664 [2024-11-27T06:13:06.194Z] 
=================================================================================================================== 00:10:46.664 [2024-11-27T06:13:06.194Z] Total : 12590.25 49.18 0.00 0.00 0.00 0.00 0.00 00:10:46.664 00:10:47.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.597 Nvme0n1 : 5.00 10930.00 42.70 0.00 0.00 0.00 0.00 0.00 00:10:47.597 [2024-11-27T06:13:07.127Z] =================================================================================================================== 00:10:47.597 [2024-11-27T06:13:07.127Z] Total : 10930.00 42.70 0.00 0.00 0.00 0.00 0.00 00:10:47.597 00:10:48.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.529 Nvme0n1 : 6.00 11105.50 43.38 0.00 0.00 0.00 0.00 0.00 00:10:48.529 [2024-11-27T06:13:08.059Z] =================================================================================================================== 00:10:48.529 [2024-11-27T06:13:08.059Z] Total : 11105.50 43.38 0.00 0.00 0.00 0.00 0.00 00:10:48.529 00:10:49.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.461 Nvme0n1 : 7.00 11187.71 43.70 0.00 0.00 0.00 0.00 0.00 00:10:49.461 [2024-11-27T06:13:08.991Z] =================================================================================================================== 00:10:49.461 [2024-11-27T06:13:08.991Z] Total : 11187.71 43.70 0.00 0.00 0.00 0.00 0.00 00:10:49.461 00:10:50.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.395 Nvme0n1 : 8.00 11361.75 44.38 0.00 0.00 0.00 0.00 0.00 00:10:50.395 [2024-11-27T06:13:09.925Z] =================================================================================================================== 00:10:50.395 [2024-11-27T06:13:09.925Z] Total : 11361.75 44.38 0.00 0.00 0.00 0.00 0.00 00:10:50.395 00:10:51.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.327 Nvme0n1 : 9.00 11486.44 44.87 0.00 0.00 0.00 0.00 0.00 00:10:51.327 [2024-11-27T06:13:10.857Z] =================================================================================================================== 00:10:51.327 [2024-11-27T06:13:10.857Z] Total : 11486.44 44.87 0.00 0.00 0.00 0.00 0.00 00:10:51.327 00:10:52.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.259 Nvme0n1 : 10.00 11568.60 45.19 0.00 0.00 0.00 0.00 0.00 00:10:52.259 [2024-11-27T06:13:11.789Z] =================================================================================================================== 00:10:52.259 [2024-11-27T06:13:11.789Z] Total : 11568.60 45.19 0.00 0.00 0.00 0.00 0.00 00:10:52.259 00:10:52.259 00:10:52.259 Latency(us) 00:10:52.259 [2024-11-27T06:13:11.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.259 Nvme0n1 : 10.00 11578.24 45.23 0.00 0.00 11051.06 4234.63 674315.03 00:10:52.259 [2024-11-27T06:13:11.789Z] =================================================================================================================== 00:10:52.259 [2024-11-27T06:13:11.789Z] Total : 11578.24 45.23 0.00 0.00 11051.06 4234.63 674315.03 00:10:52.259 { 00:10:52.259 "results": [ 00:10:52.259 { 00:10:52.259 "job": "Nvme0n1", 00:10:52.259 "core_mask": "0x2", 00:10:52.259 "workload": "randwrite", 00:10:52.259 "status": "finished", 00:10:52.259 "queue_depth": 128, 00:10:52.259 "io_size": 4096, 00:10:52.259 
"runtime": 10.002732, 00:10:52.259 "iops": 11578.23682569922, 00:10:52.259 "mibps": 45.227487600387576, 00:10:52.259 "io_failed": 0, 00:10:52.259 "io_timeout": 0, 00:10:52.259 "avg_latency_us": 11051.05556596718, 00:10:52.259 "min_latency_us": 4234.633846153846, 00:10:52.259 "max_latency_us": 674315.0276923077 00:10:52.259 } 00:10:52.259 ], 00:10:52.259 "core_count": 1 00:10:52.259 } 00:10:52.259 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79383 00:10:52.259 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 79383 ']' 00:10:52.259 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 79383 00:10:52.259 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:52.259 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.259 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79383 00:10:52.517 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:52.517 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:52.517 killing process with pid 79383 00:10:52.518 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79383' 00:10:52.518 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 79383 00:10:52.518 Received shutdown signal, test time was about 10.000000 seconds 00:10:52.518 00:10:52.518 Latency(us) 00:10:52.518 [2024-11-27T06:13:12.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.518 [2024-11-27T06:13:12.048Z] =================================================================================================================== 00:10:52.518 [2024-11-27T06:13:12.048Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:52.518 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 79383 00:10:52.518 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:52.775 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:53.034 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:53.034 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:53.034 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:53.034 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:53.034 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:53.292 [2024-11-27 06:13:12.741761] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:53.292 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:53.550 2024/11/27 06:13:12 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f3060472-d724-4f6a-b409-d737747b11da], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:53.550 request: 00:10:53.550 { 00:10:53.550 "method": "bdev_lvol_get_lvstores", 00:10:53.550 "params": { 00:10:53.550 "uuid": "f3060472-d724-4f6a-b409-d737747b11da" 00:10:53.550 } 00:10:53.550 } 00:10:53.550 Got JSON-RPC error response 00:10:53.550 GoRPCClient: error on JSON-RPC call 00:10:53.550 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:53.550 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:53.550 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:53.550 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:53.550 06:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 
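The cluster arithmetic asserted throughout this test follows directly from the sizes involved, with one cluster apparently reserved for lvstore metadata (the store reports 49 data clusters out of 50, and 99 out of 100):

    200 MiB backing file / 4 MiB cluster = 50 clusters  -> total_data_clusters = 49
    400 MiB after grow   / 4 MiB cluster = 100 clusters -> total_data_clusters = 99
    150 MiB lvol         / 4 MiB cluster = 37.5 -> 38 allocated clusters
    free after grow: 99 - 38 = 61                       -> free_clusters = 61

Deleting aio_bdev out from under the store (line 84 above) hot-removes the lvstore, so the NOT-wrapped bdev_lvol_get_lvstores call is a negative test: NOT inverts the command's exit status, and the JSON-RPC error (Code=-19, No such device) is the expected outcome. Re-creating the AIO bdev from the same file lets the lvstore be rediscovered from its on-disk metadata, which the free_clusters == 61 and total_data_clusters == 99 checks just below confirm.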
00:10:53.807 aio_bdev 00:10:53.808 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8a8ad300-0859-46b5-92a2-2e36e71e13f0 00:10:53.808 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8a8ad300-0859-46b5-92a2-2e36e71e13f0 00:10:53.808 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.808 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:53.808 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.808 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.808 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:54.065 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a8ad300-0859-46b5-92a2-2e36e71e13f0 -t 2000 00:10:54.322 [ 00:10:54.322 { 00:10:54.322 "aliases": [ 00:10:54.322 "lvs/lvol" 00:10:54.322 ], 00:10:54.322 "assigned_rate_limits": { 00:10:54.322 "r_mbytes_per_sec": 0, 00:10:54.322 "rw_ios_per_sec": 0, 00:10:54.322 "rw_mbytes_per_sec": 0, 00:10:54.322 "w_mbytes_per_sec": 0 00:10:54.322 }, 00:10:54.323 "block_size": 4096, 00:10:54.323 "claimed": false, 00:10:54.323 "driver_specific": { 00:10:54.323 "lvol": { 00:10:54.323 "base_bdev": "aio_bdev", 00:10:54.323 "clone": false, 00:10:54.323 "esnap_clone": false, 00:10:54.323 "lvol_store_uuid": "f3060472-d724-4f6a-b409-d737747b11da", 00:10:54.323 "num_allocated_clusters": 38, 00:10:54.323 "snapshot": false, 00:10:54.323 "thin_provision": false 00:10:54.323 } 00:10:54.323 }, 00:10:54.323 "name": "8a8ad300-0859-46b5-92a2-2e36e71e13f0", 00:10:54.323 "num_blocks": 38912, 00:10:54.323 "product_name": "Logical Volume", 00:10:54.323 "supported_io_types": { 00:10:54.323 "abort": false, 00:10:54.323 "compare": false, 00:10:54.323 "compare_and_write": false, 00:10:54.323 "copy": false, 00:10:54.323 "flush": false, 00:10:54.323 "get_zone_info": false, 00:10:54.323 "nvme_admin": false, 00:10:54.323 "nvme_io": false, 00:10:54.323 "nvme_io_md": false, 00:10:54.323 "nvme_iov_md": false, 00:10:54.323 "read": true, 00:10:54.323 "reset": true, 00:10:54.323 "seek_data": true, 00:10:54.323 "seek_hole": true, 00:10:54.323 "unmap": true, 00:10:54.323 "write": true, 00:10:54.323 "write_zeroes": true, 00:10:54.323 "zcopy": false, 00:10:54.323 "zone_append": false, 00:10:54.323 "zone_management": false 00:10:54.323 }, 00:10:54.323 "uuid": "8a8ad300-0859-46b5-92a2-2e36e71e13f0", 00:10:54.323 "zoned": false 00:10:54.323 } 00:10:54.323 ] 00:10:54.323 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:54.323 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:54.323 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:54.580 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( 
free_clusters == 61 )) 00:10:54.580 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3060472-d724-4f6a-b409-d737747b11da 00:10:54.580 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:54.838 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:54.838 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8a8ad300-0859-46b5-92a2-2e36e71e13f0 00:10:54.838 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3060472-d724-4f6a-b409-d737747b11da 00:10:55.403 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:55.403 06:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:55.968 00:10:55.968 real 0m16.817s 00:10:55.968 user 0m16.111s 00:10:55.968 sys 0m1.657s 00:10:55.968 ************************************ 00:10:55.968 END TEST lvs_grow_clean 00:10:55.968 ************************************ 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:55.968 ************************************ 00:10:55.968 START TEST lvs_grow_dirty 00:10:55.968 ************************************ 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:55.968 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:56.223 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:56.223 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:56.223 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:10:56.223 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:10:56.223 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:56.480 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:56.480 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:56.480 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 lvol 150 00:10:56.738 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8793bd5d-9b75-4a09-a354-d154abd17526 00:10:56.738 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:56.738 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:56.996 [2024-11-27 06:13:16.337168] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:56.996 [2024-11-27 06:13:16.337229] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:56.996 true 00:10:56.996 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:10:56.996 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:57.253 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:57.253 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:57.253 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8793bd5d-9b75-4a09-a354-d154abd17526 00:10:57.508 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:57.766 [2024-11-27 06:13:17.178091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:57.766 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:58.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79811 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79811 /var/tmp/bdevperf.sock 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 79811 ']' 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.023 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:58.023 [2024-11-27 06:13:17.447049] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:10:58.023 [2024-11-27 06:13:17.447115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79811 ] 00:10:58.279 [2024-11-27 06:13:17.561226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
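For reference, the RPC sequence traced above is the standard SPDK pattern for exposing a logical volume over NVMe/TCP. A minimal sketch of the same target-side wiring, assuming a running nvmf_tgt with the TCP transport already created and with <lvol-uuid> standing in for the lvol bdev name, looks like:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0      # -a allows any host, -s sets the serial number
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>      # attach the lvol bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420  # advertise it via the discovery subsystem

The initiator side (the bdevperf process being started here) then attaches to the same address and NQN with bdev_nvme_attach_controller, as the next lines show.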
00:10:58.279 [2024-11-27 06:13:17.587747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.279 [2024-11-27 06:13:17.607803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.843 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.844 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:58.844 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:59.100 Nvme0n1 00:10:59.100 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:59.358 [ 00:10:59.358 { 00:10:59.358 "aliases": [ 00:10:59.358 "8793bd5d-9b75-4a09-a354-d154abd17526" 00:10:59.358 ], 00:10:59.358 "assigned_rate_limits": { 00:10:59.358 "r_mbytes_per_sec": 0, 00:10:59.358 "rw_ios_per_sec": 0, 00:10:59.358 "rw_mbytes_per_sec": 0, 00:10:59.358 "w_mbytes_per_sec": 0 00:10:59.358 }, 00:10:59.358 "block_size": 4096, 00:10:59.358 "claimed": false, 00:10:59.358 "driver_specific": { 00:10:59.358 "mp_policy": "active_passive", 00:10:59.358 "nvme": [ 00:10:59.358 { 00:10:59.358 "ctrlr_data": { 00:10:59.358 "ana_reporting": false, 00:10:59.358 "cntlid": 1, 00:10:59.358 "firmware_revision": "25.01", 00:10:59.358 "model_number": "SPDK bdev Controller", 00:10:59.358 "multi_ctrlr": true, 00:10:59.358 "oacs": { 00:10:59.358 "firmware": 0, 00:10:59.358 "format": 0, 00:10:59.358 "ns_manage": 0, 00:10:59.358 "security": 0 00:10:59.358 }, 00:10:59.358 "serial_number": "SPDK0", 00:10:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:59.358 "vendor_id": "0x8086" 00:10:59.358 }, 00:10:59.358 "ns_data": { 00:10:59.358 "can_share": true, 00:10:59.358 "id": 1 00:10:59.358 }, 00:10:59.358 "trid": { 00:10:59.358 "adrfam": "IPv4", 00:10:59.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:59.358 "traddr": "10.0.0.3", 00:10:59.358 "trsvcid": "4420", 00:10:59.358 "trtype": "TCP" 00:10:59.358 }, 00:10:59.358 "vs": { 00:10:59.358 "nvme_version": "1.3" 00:10:59.358 } 00:10:59.358 } 00:10:59.358 ] 00:10:59.358 }, 00:10:59.358 "memory_domains": [ 00:10:59.358 { 00:10:59.358 "dma_device_id": "system", 00:10:59.358 "dma_device_type": 1 00:10:59.358 } 00:10:59.358 ], 00:10:59.358 "name": "Nvme0n1", 00:10:59.358 "num_blocks": 38912, 00:10:59.358 "numa_id": -1, 00:10:59.358 "product_name": "NVMe disk", 00:10:59.358 "supported_io_types": { 00:10:59.358 "abort": true, 00:10:59.358 "compare": true, 00:10:59.358 "compare_and_write": true, 00:10:59.358 "copy": true, 00:10:59.358 "flush": true, 00:10:59.358 "get_zone_info": false, 00:10:59.358 "nvme_admin": true, 00:10:59.358 "nvme_io": true, 00:10:59.358 "nvme_io_md": false, 00:10:59.358 "nvme_iov_md": false, 00:10:59.358 "read": true, 00:10:59.358 "reset": true, 00:10:59.358 "seek_data": false, 00:10:59.358 "seek_hole": false, 00:10:59.358 "unmap": true, 00:10:59.358 "write": true, 00:10:59.358 "write_zeroes": true, 00:10:59.358 "zcopy": false, 00:10:59.358 "zone_append": false, 00:10:59.358 "zone_management": false 00:10:59.358 }, 00:10:59.358 "uuid": "8793bd5d-9b75-4a09-a354-d154abd17526", 00:10:59.358 "zoned": false 00:10:59.358 } 00:10:59.358 ] 00:10:59.358 06:13:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79864 00:10:59.358 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:59.358 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:59.617 Running I/O for 10 seconds... 00:11:00.547 Latency(us) 00:11:00.547 [2024-11-27T06:13:20.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.547 Nvme0n1 : 1.00 7081.00 27.66 0.00 0.00 0.00 0.00 0.00 00:11:00.547 [2024-11-27T06:13:20.077Z] =================================================================================================================== 00:11:00.547 [2024-11-27T06:13:20.077Z] Total : 7081.00 27.66 0.00 0.00 0.00 0.00 0.00 00:11:00.547 00:11:01.478 06:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:01.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.478 Nvme0n1 : 2.00 5542.50 21.65 0.00 0.00 0.00 0.00 0.00 00:11:01.478 [2024-11-27T06:13:21.008Z] =================================================================================================================== 00:11:01.478 [2024-11-27T06:13:21.008Z] Total : 5542.50 21.65 0.00 0.00 0.00 0.00 0.00 00:11:01.478 00:11:01.736 true 00:11:01.736 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:01.736 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:01.993 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:01.993 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:01.993 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 79864 00:11:02.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.559 Nvme0n1 : 3.00 7172.33 28.02 0.00 0.00 0.00 0.00 0.00 00:11:02.559 [2024-11-27T06:13:22.089Z] =================================================================================================================== 00:11:02.559 [2024-11-27T06:13:22.089Z] Total : 7172.33 28.02 0.00 0.00 0.00 0.00 0.00 00:11:02.559 00:11:03.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.489 Nvme0n1 : 4.00 8065.00 31.50 0.00 0.00 0.00 0.00 0.00 00:11:03.489 [2024-11-27T06:13:23.019Z] =================================================================================================================== 00:11:03.489 [2024-11-27T06:13:23.019Z] Total : 8065.00 31.50 0.00 0.00 0.00 0.00 0.00 00:11:03.489 00:11:04.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.422 Nvme0n1 : 5.00 7716.60 30.14 0.00 0.00 0.00 0.00 0.00 00:11:04.422 [2024-11-27T06:13:23.952Z] 
=================================================================================================================== 00:11:04.422 [2024-11-27T06:13:23.952Z] Total : 7716.60 30.14 0.00 0.00 0.00 0.00 0.00 00:11:04.422 00:11:05.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.794 Nvme0n1 : 6.00 8173.17 31.93 0.00 0.00 0.00 0.00 0.00 00:11:05.794 [2024-11-27T06:13:25.324Z] =================================================================================================================== 00:11:05.794 [2024-11-27T06:13:25.324Z] Total : 8173.17 31.93 0.00 0.00 0.00 0.00 0.00 00:11:05.794 00:11:06.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.727 Nvme0n1 : 7.00 8487.29 33.15 0.00 0.00 0.00 0.00 0.00 00:11:06.727 [2024-11-27T06:13:26.257Z] =================================================================================================================== 00:11:06.727 [2024-11-27T06:13:26.257Z] Total : 8487.29 33.15 0.00 0.00 0.00 0.00 0.00 00:11:06.727 00:11:07.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.661 Nvme0n1 : 8.00 8711.75 34.03 0.00 0.00 0.00 0.00 0.00 00:11:07.661 [2024-11-27T06:13:27.191Z] =================================================================================================================== 00:11:07.661 [2024-11-27T06:13:27.191Z] Total : 8711.75 34.03 0.00 0.00 0.00 0.00 0.00 00:11:07.661 00:11:08.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:08.641 Nvme0n1 : 9.00 8881.56 34.69 0.00 0.00 0.00 0.00 0.00 00:11:08.641 [2024-11-27T06:13:28.171Z] =================================================================================================================== 00:11:08.641 [2024-11-27T06:13:28.171Z] Total : 8881.56 34.69 0.00 0.00 0.00 0.00 0.00 00:11:08.641 00:11:09.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.611 Nvme0n1 : 10.00 8838.10 34.52 0.00 0.00 0.00 0.00 0.00 00:11:09.611 [2024-11-27T06:13:29.141Z] =================================================================================================================== 00:11:09.611 [2024-11-27T06:13:29.141Z] Total : 8838.10 34.52 0.00 0.00 0.00 0.00 0.00 00:11:09.611 00:11:09.611 00:11:09.611 Latency(us) 00:11:09.611 [2024-11-27T06:13:29.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.611 Nvme0n1 : 10.01 8842.28 34.54 0.00 0.00 14467.59 3629.69 822728.86 00:11:09.611 [2024-11-27T06:13:29.141Z] =================================================================================================================== 00:11:09.611 [2024-11-27T06:13:29.141Z] Total : 8842.28 34.54 0.00 0.00 14467.59 3629.69 822728.86 00:11:09.611 { 00:11:09.611 "results": [ 00:11:09.611 { 00:11:09.611 "job": "Nvme0n1", 00:11:09.611 "core_mask": "0x2", 00:11:09.611 "workload": "randwrite", 00:11:09.611 "status": "finished", 00:11:09.611 "queue_depth": 128, 00:11:09.611 "io_size": 4096, 00:11:09.611 "runtime": 10.009754, 00:11:09.611 "iops": 8842.275244726294, 00:11:09.611 "mibps": 34.540137674712085, 00:11:09.611 "io_failed": 0, 00:11:09.611 "io_timeout": 0, 00:11:09.611 "avg_latency_us": 14467.585597918336, 00:11:09.611 "min_latency_us": 3629.686153846154, 00:11:09.611 "max_latency_us": 822728.8615384615 00:11:09.611 } 00:11:09.611 ], 00:11:09.611 "core_count": 1 00:11:09.611 } 00:11:09.611 06:13:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79811 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 79811 ']' 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 79811 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79811 00:11:09.611 killing process with pid 79811 00:11:09.611 Received shutdown signal, test time was about 10.000000 seconds 00:11:09.611 00:11:09.611 Latency(us) 00:11:09.611 [2024-11-27T06:13:29.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.611 [2024-11-27T06:13:29.141Z] =================================================================================================================== 00:11:09.611 [2024-11-27T06:13:29.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79811' 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 79811 00:11:09.611 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 79811 00:11:09.611 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:09.869 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:10.128 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:10.128 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:10.386 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:10.386 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:10.386 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 79220 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 79220 00:11:10.387 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 79220 Killed "${NVMF_APP[@]}" "$@" 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:10.387 06:13:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=80021 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 80021 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 80021 ']' 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.387 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:10.387 [2024-11-27 06:13:29.789152] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:10.387 [2024-11-27 06:13:29.789227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.387 [2024-11-27 06:13:29.905781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:10.645 [2024-11-27 06:13:29.936497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.645 [2024-11-27 06:13:29.955111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.645 [2024-11-27 06:13:29.955146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.645 [2024-11-27 06:13:29.955153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.645 [2024-11-27 06:13:29.955159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.645 [2024-11-27 06:13:29.955164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
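A note on what follows: the kill -9 above took the target down with the lvstore still open, so its metadata was never marked clean. When the freshly started target re-creates the backing AIO bdev, lvstore examine detects the dirty state and replays the blobstore metadata, which is what the "Performing recovery on blobstore" notices below report. A minimal sketch of that reload path, assuming the same backing file and the repository's scripts/rpc.py, would be:

  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096  # re-register the file; examine triggers recovery
  rpc.py bdev_wait_for_examine                    # block until the blobstore replay finishes
  rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000    # wait up to 2000 ms for the recovered lvol bdev to reappear

Here <lvol-uuid> is a placeholder for the lvol's bdev name (8793bd5d-9b75-4a09-a354-d154abd17526 in this run).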
00:11:10.645 [2024-11-27 06:13:29.955412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.211 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.211 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:11.211 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.211 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.211 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:11.211 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.211 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:11.470 [2024-11-27 06:13:30.897172] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:11.470 [2024-11-27 06:13:30.898327] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:11.470 [2024-11-27 06:13:30.898649] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8793bd5d-9b75-4a09-a354-d154abd17526 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8793bd5d-9b75-4a09-a354-d154abd17526 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.470 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:11.727 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8793bd5d-9b75-4a09-a354-d154abd17526 -t 2000 00:11:11.985 [ 00:11:11.985 { 00:11:11.985 "aliases": [ 00:11:11.985 "lvs/lvol" 00:11:11.985 ], 00:11:11.985 "assigned_rate_limits": { 00:11:11.985 "r_mbytes_per_sec": 0, 00:11:11.985 "rw_ios_per_sec": 0, 00:11:11.985 "rw_mbytes_per_sec": 0, 00:11:11.985 "w_mbytes_per_sec": 0 00:11:11.985 }, 00:11:11.985 "block_size": 4096, 00:11:11.985 "claimed": false, 00:11:11.985 "driver_specific": { 00:11:11.985 "lvol": { 00:11:11.985 "base_bdev": "aio_bdev", 00:11:11.985 "clone": false, 00:11:11.985 "esnap_clone": false, 00:11:11.985 "lvol_store_uuid": "a1da7136-b70f-40b3-b2b8-bd220dd787e6", 00:11:11.985 "num_allocated_clusters": 38, 00:11:11.985 "snapshot": false, 00:11:11.985 
"thin_provision": false 00:11:11.985 } 00:11:11.985 }, 00:11:11.985 "name": "8793bd5d-9b75-4a09-a354-d154abd17526", 00:11:11.985 "num_blocks": 38912, 00:11:11.985 "product_name": "Logical Volume", 00:11:11.985 "supported_io_types": { 00:11:11.985 "abort": false, 00:11:11.986 "compare": false, 00:11:11.986 "compare_and_write": false, 00:11:11.986 "copy": false, 00:11:11.986 "flush": false, 00:11:11.986 "get_zone_info": false, 00:11:11.986 "nvme_admin": false, 00:11:11.986 "nvme_io": false, 00:11:11.986 "nvme_io_md": false, 00:11:11.986 "nvme_iov_md": false, 00:11:11.986 "read": true, 00:11:11.986 "reset": true, 00:11:11.986 "seek_data": true, 00:11:11.986 "seek_hole": true, 00:11:11.986 "unmap": true, 00:11:11.986 "write": true, 00:11:11.986 "write_zeroes": true, 00:11:11.986 "zcopy": false, 00:11:11.986 "zone_append": false, 00:11:11.986 "zone_management": false 00:11:11.986 }, 00:11:11.986 "uuid": "8793bd5d-9b75-4a09-a354-d154abd17526", 00:11:11.986 "zoned": false 00:11:11.986 } 00:11:11.986 ] 00:11:11.986 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:11.986 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:11.986 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:12.243 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:12.243 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:12.243 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:12.501 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:12.501 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:12.501 [2024-11-27 06:13:32.018949] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:12.758 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:12.758 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:11:12.758 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:12.758 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.758 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.758 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.759 06:13:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.759 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.759 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.759 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.759 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:12.759 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:12.759 2024/11/27 06:13:32 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:a1da7136-b70f-40b3-b2b8-bd220dd787e6], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:12.759 request: 00:11:12.759 { 00:11:12.759 "method": "bdev_lvol_get_lvstores", 00:11:12.759 "params": { 00:11:12.759 "uuid": "a1da7136-b70f-40b3-b2b8-bd220dd787e6" 00:11:12.759 } 00:11:12.759 } 00:11:12.759 Got JSON-RPC error response 00:11:12.759 GoRPCClient: error on JSON-RPC call 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:13.016 aio_bdev 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8793bd5d-9b75-4a09-a354-d154abd17526 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8793bd5d-9b75-4a09-a354-d154abd17526 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.016 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:13.273 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8793bd5d-9b75-4a09-a354-d154abd17526 -t 2000 00:11:13.538 [ 
00:11:13.538 { 00:11:13.538 "aliases": [ 00:11:13.538 "lvs/lvol" 00:11:13.538 ], 00:11:13.538 "assigned_rate_limits": { 00:11:13.538 "r_mbytes_per_sec": 0, 00:11:13.538 "rw_ios_per_sec": 0, 00:11:13.538 "rw_mbytes_per_sec": 0, 00:11:13.538 "w_mbytes_per_sec": 0 00:11:13.538 }, 00:11:13.538 "block_size": 4096, 00:11:13.538 "claimed": false, 00:11:13.538 "driver_specific": { 00:11:13.538 "lvol": { 00:11:13.538 "base_bdev": "aio_bdev", 00:11:13.538 "clone": false, 00:11:13.538 "esnap_clone": false, 00:11:13.538 "lvol_store_uuid": "a1da7136-b70f-40b3-b2b8-bd220dd787e6", 00:11:13.538 "num_allocated_clusters": 38, 00:11:13.538 "snapshot": false, 00:11:13.538 "thin_provision": false 00:11:13.538 } 00:11:13.538 }, 00:11:13.538 "name": "8793bd5d-9b75-4a09-a354-d154abd17526", 00:11:13.538 "num_blocks": 38912, 00:11:13.538 "product_name": "Logical Volume", 00:11:13.538 "supported_io_types": { 00:11:13.538 "abort": false, 00:11:13.538 "compare": false, 00:11:13.538 "compare_and_write": false, 00:11:13.538 "copy": false, 00:11:13.538 "flush": false, 00:11:13.538 "get_zone_info": false, 00:11:13.538 "nvme_admin": false, 00:11:13.538 "nvme_io": false, 00:11:13.538 "nvme_io_md": false, 00:11:13.538 "nvme_iov_md": false, 00:11:13.538 "read": true, 00:11:13.538 "reset": true, 00:11:13.538 "seek_data": true, 00:11:13.538 "seek_hole": true, 00:11:13.538 "unmap": true, 00:11:13.538 "write": true, 00:11:13.538 "write_zeroes": true, 00:11:13.538 "zcopy": false, 00:11:13.538 "zone_append": false, 00:11:13.538 "zone_management": false 00:11:13.538 }, 00:11:13.538 "uuid": "8793bd5d-9b75-4a09-a354-d154abd17526", 00:11:13.538 "zoned": false 00:11:13.538 } 00:11:13.538 ] 00:11:13.538 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:13.538 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:13.539 06:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:13.795 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:13.795 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:13.795 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:14.052 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:14.052 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8793bd5d-9b75-4a09-a354-d154abd17526 00:11:14.052 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1da7136-b70f-40b3-b2b8-bd220dd787e6 00:11:14.310 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:14.568 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:15.133 ************************************ 00:11:15.133 END TEST lvs_grow_dirty 00:11:15.133 ************************************ 00:11:15.133 00:11:15.133 real 0m19.116s 00:11:15.133 user 0m40.951s 00:11:15.133 sys 0m5.038s 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:15.133 nvmf_trace.0 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.133 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:16.065 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.065 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:16.065 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.066 rmmod nvme_tcp 00:11:16.066 rmmod nvme_fabrics 00:11:16.066 rmmod nvme_keyring 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 80021 ']' 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 80021 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 80021 ']' 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 80021 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:16.066 06:13:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80021 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.066 killing process with pid 80021 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80021' 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 80021 00:11:16.066 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 80021 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:16.324 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:16.582 00:11:16.582 real 0m39.131s 00:11:16.582 user 1m3.529s 00:11:16.582 sys 0m8.170s 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.582 ************************************ 00:11:16.582 END TEST nvmf_lvs_grow 00:11:16.582 ************************************ 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:16.582 ************************************ 00:11:16.582 START TEST nvmf_bdev_io_wait 00:11:16.582 ************************************ 00:11:16.582 06:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:16.582 * Looking for test storage... 
00:11:16.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.582 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.582 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.582 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.842 --rc genhtml_branch_coverage=1 00:11:16.842 --rc genhtml_function_coverage=1 00:11:16.842 --rc genhtml_legend=1 00:11:16.842 --rc geninfo_all_blocks=1 00:11:16.842 --rc geninfo_unexecuted_blocks=1 00:11:16.842 00:11:16.842 ' 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.842 --rc genhtml_branch_coverage=1 00:11:16.842 --rc genhtml_function_coverage=1 00:11:16.842 --rc genhtml_legend=1 00:11:16.842 --rc geninfo_all_blocks=1 00:11:16.842 --rc geninfo_unexecuted_blocks=1 00:11:16.842 00:11:16.842 ' 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.842 --rc genhtml_branch_coverage=1 00:11:16.842 --rc genhtml_function_coverage=1 00:11:16.842 --rc genhtml_legend=1 00:11:16.842 --rc geninfo_all_blocks=1 00:11:16.842 --rc geninfo_unexecuted_blocks=1 00:11:16.842 00:11:16.842 ' 00:11:16.842 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.842 --rc genhtml_branch_coverage=1 00:11:16.842 --rc genhtml_function_coverage=1 00:11:16.843 --rc genhtml_legend=1 00:11:16.843 --rc geninfo_all_blocks=1 00:11:16.843 --rc geninfo_unexecuted_blocks=1 00:11:16.843 00:11:16.843 ' 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
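Since NET_TYPE=virt, nvmftestinit (invoked next) builds the test network from veth pairs bridged between the host and a nvmf_tgt_ns_spdk network namespace; the "Cannot find device" and "Cannot open network namespace" messages below are only the best-effort teardown of a previous run, each guarded by "# true". Condensed, the topology it then creates, using the addresses seen throughout this log, is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # the *_br peers are enslaved to this bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # (each device is also brought up with "ip link set ... up", omitted here for brevity)

so the initiator at 10.0.0.1 can reach the target listening on 10.0.0.3:4420 inside the namespace.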
00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:16.843 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.844 
06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:16.844 Cannot find device "nvmf_init_br" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:16.844 Cannot find device "nvmf_init_br2" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:16.844 Cannot find device "nvmf_tgt_br" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.844 Cannot find device "nvmf_tgt_br2" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:16.844 Cannot find device "nvmf_init_br" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:16.844 Cannot find device "nvmf_init_br2" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:16.844 Cannot find device "nvmf_tgt_br" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:16.844 Cannot find device "nvmf_tgt_br2" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:16.844 Cannot find device "nvmf_br" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:16.844 Cannot find device "nvmf_init_if" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:16.844 Cannot find device "nvmf_init_if2" 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:16.844 
06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:16.844 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:17.102 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:17.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:17.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:17.103 00:11:17.103 --- 10.0.0.3 ping statistics --- 00:11:17.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.103 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:17.103 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:17.103 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:11:17.103 00:11:17.103 --- 10.0.0.4 ping statistics --- 00:11:17.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.103 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:17.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:11:17.103 00:11:17.103 --- 10.0.0.1 ping statistics --- 00:11:17.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.103 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:17.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:17.103 00:11:17.103 --- 10.0.0.2 ping statistics --- 00:11:17.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.103 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=80493 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 80493 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 80493 ']' 00:11:17.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.103 06:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:17.103 [2024-11-27 06:13:36.555057] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
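
The block above is nvmf_veth_init building the test network: veth pairs whose free ends are enslaved to a bridge, with the target-side ends moved into a private namespace so the SPDK target and the initiator see distinct network stacks. (The earlier "Cannot find device" lines are best-effort cleanup of leftovers from a previous run; note each failing command is followed by "-- # true".) A condensed sketch with one pair per side, while the harness builds two of each (if/if2); names and addresses as in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge joins the two free ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# The comment tag is what lets teardown later strip exactly these rules:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3    # initiator -> target across the bridge, as verified above
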
00:11:17.103 [2024-11-27 06:13:36.555113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.361 [2024-11-27 06:13:36.671535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:17.361 [2024-11-27 06:13:36.698647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.361 [2024-11-27 06:13:36.719900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.361 [2024-11-27 06:13:36.720070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.361 [2024-11-27 06:13:36.720126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.361 [2024-11-27 06:13:36.720152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.361 [2024-11-27 06:13:36.720167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.361 [2024-11-27 06:13:36.721021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.361 [2024-11-27 06:13:36.721252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.361 [2024-11-27 06:13:36.722171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.361 [2024-11-27 06:13:36.722319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.926 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.926 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:17.926 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.926 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.926 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.184 
06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.184 [2024-11-27 06:13:37.538474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.184 Malloc0 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.184 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:18.185 [2024-11-27 06:13:37.589914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=80546 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=80547 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=80549 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local 
subsystem config 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.185 { 00:11:18.185 "params": { 00:11:18.185 "name": "Nvme$subsystem", 00:11:18.185 "trtype": "$TEST_TRANSPORT", 00:11:18.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.185 "adrfam": "ipv4", 00:11:18.185 "trsvcid": "$NVMF_PORT", 00:11:18.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.185 "hdgst": ${hdgst:-false}, 00:11:18.185 "ddgst": ${ddgst:-false} 00:11:18.185 }, 00:11:18.185 "method": "bdev_nvme_attach_controller" 00:11:18.185 } 00:11:18.185 EOF 00:11:18.185 )") 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.185 { 00:11:18.185 "params": { 00:11:18.185 "name": "Nvme$subsystem", 00:11:18.185 "trtype": "$TEST_TRANSPORT", 00:11:18.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.185 "adrfam": "ipv4", 00:11:18.185 "trsvcid": "$NVMF_PORT", 00:11:18.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.185 "hdgst": ${hdgst:-false}, 00:11:18.185 "ddgst": ${ddgst:-false} 00:11:18.185 }, 00:11:18.185 "method": "bdev_nvme_attach_controller" 00:11:18.185 } 00:11:18.185 EOF 00:11:18.185 )") 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.185 { 00:11:18.185 "params": { 00:11:18.185 "name": "Nvme$subsystem", 00:11:18.185 "trtype": "$TEST_TRANSPORT", 00:11:18.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.185 
"adrfam": "ipv4", 00:11:18.185 "trsvcid": "$NVMF_PORT", 00:11:18.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.185 "hdgst": ${hdgst:-false}, 00:11:18.185 "ddgst": ${ddgst:-false} 00:11:18.185 }, 00:11:18.185 "method": "bdev_nvme_attach_controller" 00:11:18.185 } 00:11:18.185 EOF 00:11:18.185 )") 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.185 { 00:11:18.185 "params": { 00:11:18.185 "name": "Nvme$subsystem", 00:11:18.185 "trtype": "$TEST_TRANSPORT", 00:11:18.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.185 "adrfam": "ipv4", 00:11:18.185 "trsvcid": "$NVMF_PORT", 00:11:18.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.185 "hdgst": ${hdgst:-false}, 00:11:18.185 "ddgst": ${ddgst:-false} 00:11:18.185 }, 00:11:18.185 "method": "bdev_nvme_attach_controller" 00:11:18.185 } 00:11:18.185 EOF 00:11:18.185 )") 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:18.185 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:18.185 "params": { 00:11:18.185 "name": "Nvme1", 00:11:18.185 "trtype": "tcp", 00:11:18.185 "traddr": "10.0.0.3", 00:11:18.185 "adrfam": "ipv4", 00:11:18.185 "trsvcid": "4420", 00:11:18.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.186 "hdgst": false, 00:11:18.186 "ddgst": false 00:11:18.186 }, 00:11:18.186 "method": "bdev_nvme_attach_controller" 00:11:18.186 }' 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=80552 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:18.186 "params": { 00:11:18.186 "name": "Nvme1", 00:11:18.186 "trtype": "tcp", 00:11:18.186 "traddr": "10.0.0.3", 00:11:18.186 "adrfam": "ipv4", 00:11:18.186 "trsvcid": "4420", 00:11:18.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.186 "hdgst": false, 00:11:18.186 "ddgst": false 00:11:18.186 }, 00:11:18.186 "method": "bdev_nvme_attach_controller" 00:11:18.186 }' 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:18.186 "params": { 00:11:18.186 "name": "Nvme1", 00:11:18.186 "trtype": "tcp", 00:11:18.186 "traddr": "10.0.0.3", 00:11:18.186 "adrfam": "ipv4", 00:11:18.186 "trsvcid": "4420", 00:11:18.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.186 "hdgst": false, 00:11:18.186 "ddgst": false 00:11:18.186 }, 00:11:18.186 "method": "bdev_nvme_attach_controller" 00:11:18.186 }' 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:18.186 "params": { 00:11:18.186 "name": "Nvme1", 00:11:18.186 "trtype": "tcp", 00:11:18.186 "traddr": "10.0.0.3", 00:11:18.186 "adrfam": "ipv4", 00:11:18.186 "trsvcid": "4420", 00:11:18.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.186 "hdgst": false, 00:11:18.186 "ddgst": false 00:11:18.186 }, 00:11:18.186 "method": "bdev_nvme_attach_controller" 00:11:18.186 }' 00:11:18.186 [2024-11-27 06:13:37.636567] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:18.186 [2024-11-27 06:13:37.636620] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:18.186 [2024-11-27 06:13:37.644588] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:18.186 [2024-11-27 06:13:37.644641] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:18.186 [2024-11-27 06:13:37.645663] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:18.186 [2024-11-27 06:13:37.645715] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:18.186 [2024-11-27 06:13:37.648266] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:18.186 [2024-11-27 06:13:37.648318] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:18.186 06:13:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 80546 00:11:18.445 [2024-11-27 06:13:37.787818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
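
Four bdevperf instances then run concurrently, one workload each (write, read, flush, unmap), pinned to separate cores via -m and kept apart in shared-memory terms by the per-instance -i id, which is what produces the distinct --file-prefix=spdk1..spdk4 values in the EAL parameter lines above. A condensed launch pattern, reusing the gen_json sketch from the previous note (the harness interleaves these starts with other bookkeeping rather than issuing them back to back):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"    # pids 80546/80547/80549/80552 in this run
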
00:11:18.445 [2024-11-27 06:13:37.814550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:18.445 [2024-11-27 06:13:37.826411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:11:18.445 [2024-11-27 06:13:37.828084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:18.445 [2024-11-27 06:13:37.857222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:18.445 [2024-11-27 06:13:37.867386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:18.445 [2024-11-27 06:13:37.869321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:11:18.445 [2024-11-27 06:13:37.897171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:18.445 [2024-11-27 06:13:37.909203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:11:18.445 [2024-11-27 06:13:37.910818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:18.445 [2024-11-27 06:13:37.938084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:18.445 Running I/O for 1 seconds...
00:11:18.445 [2024-11-27 06:13:37.950026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:11:18.704 Running I/O for 1 seconds...
00:11:18.704 Running I/O for 1 seconds...
00:11:18.704 Running I/O for 1 seconds...
00:11:19.752 7924.00 IOPS, 30.95 MiB/s
00:11:19.752 Latency(us)
00:11:19.752 [2024-11-27T06:13:39.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:19.752 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:11:19.752 Nvme1n1 : 1.02 7903.23 30.87 0.00 0.00 16035.67 8065.97 27021.00
00:11:19.752 [2024-11-27T06:13:39.282Z] ===================================================================================================================
00:11:19.752 [2024-11-27T06:13:39.282Z] Total : 7903.23 30.87 0.00 0.00 16035.67 8065.97 27021.00
00:11:19.752 12390.00 IOPS, 48.40 MiB/s
00:11:19.752 Latency(us)
00:11:19.752 [2024-11-27T06:13:39.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:19.752 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:11:19.752 Nvme1n1 : 1.01 12464.60 48.69 0.00 0.00 10239.05 4688.34 20064.10
00:11:19.752 [2024-11-27T06:13:39.282Z] ===================================================================================================================
00:11:19.752 [2024-11-27T06:13:39.282Z] Total : 12464.60 48.69 0.00 0.00 10239.05 4688.34 20064.10
00:11:19.752 193904.00 IOPS, 757.44 MiB/s
00:11:19.752 Latency(us)
00:11:19.752 [2024-11-27T06:13:39.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:19.752 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:11:19.752 Nvme1n1 : 1.00 193540.40 756.02 0.00 0.00 657.35 281.99 1852.65
00:11:19.752 [2024-11-27T06:13:39.282Z] ===================================================================================================================
00:11:19.752 [2024-11-27T06:13:39.282Z] Total : 193540.40 756.02 0.00 0.00 657.35 281.99 1852.65
00:11:19.752 8080.00 IOPS, 31.56 MiB/s
00:11:19.752 Latency(us)
00:11:19.752 [2024-11-27T06:13:39.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:19.752 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:11:19.752 Nvme1n1 : 1.01 8184.07 31.97 0.00 0.00 15603.16 3175.98 32263.88
00:11:19.752 [2024-11-27T06:13:39.282Z] ===================================================================================================================
00:11:19.752 [2024-11-27T06:13:39.282Z] Total : 8184.07 31.97 0.00 0.00 15603.16 3175.98 32263.88
00:11:19.752 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 80547
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 80549
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 80552
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:19.752 rmmod nvme_tcp
00:11:19.752 rmmod nvme_fabrics
00:11:19.752 rmmod nvme_keyring
00:11:19.752 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:19.752 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 80493 ']'
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 80493
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 80493 ']'
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 80493
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80493
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:19.752 killing process with pid 80493
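
Teardown, traced here and continued below, mirrors setup in reverse, and the ordering matters: delete the subsystem over RPC while the target still answers, unload the kernel NVMe modules, kill the target process, and only then remove the tagged iptables rules and the virtual topology. A condensed sketch; rpc.py stands in for the harness's rpc_cmd wrapper, and the final netns delete is an assumption about what _remove_spdk_ns does:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp nvme-fabrics       # emits the rmmod lines seen above
kill "$nvmfpid" && wait "$nvmfpid"         # pid 80493 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drops only the comment-tagged rules
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk           # assumed: _remove_spdk_ns ends by deleting the namespace
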
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.752 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80493' 00:11:19.752 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 80493 00:11:19.752 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 80493 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:20.010 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:20.268 00:11:20.268 real 0m3.607s 00:11:20.268 user 0m15.300s 00:11:20.268 sys 0m1.385s 00:11:20.268 ************************************ 00:11:20.268 END TEST nvmf_bdev_io_wait 00:11:20.268 ************************************ 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:20.268 ************************************ 00:11:20.268 START TEST nvmf_queue_depth 00:11:20.268 ************************************ 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:20.268 * Looking for test storage... 00:11:20.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:11:20.268 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:20.533 06:13:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.533 --rc genhtml_branch_coverage=1 00:11:20.533 --rc genhtml_function_coverage=1 00:11:20.533 --rc genhtml_legend=1 00:11:20.533 --rc geninfo_all_blocks=1 00:11:20.533 --rc geninfo_unexecuted_blocks=1 00:11:20.533 00:11:20.533 ' 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.533 --rc genhtml_branch_coverage=1 00:11:20.533 --rc genhtml_function_coverage=1 00:11:20.533 --rc genhtml_legend=1 00:11:20.533 --rc geninfo_all_blocks=1 00:11:20.533 --rc geninfo_unexecuted_blocks=1 00:11:20.533 00:11:20.533 ' 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.533 --rc genhtml_branch_coverage=1 00:11:20.533 --rc genhtml_function_coverage=1 00:11:20.533 --rc genhtml_legend=1 00:11:20.533 --rc geninfo_all_blocks=1 00:11:20.533 --rc geninfo_unexecuted_blocks=1 00:11:20.533 00:11:20.533 ' 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.533 --rc genhtml_branch_coverage=1 00:11:20.533 --rc genhtml_function_coverage=1 00:11:20.533 --rc genhtml_legend=1 00:11:20.533 --rc geninfo_all_blocks=1 00:11:20.533 --rc geninfo_unexecuted_blocks=1 00:11:20.533 00:11:20.533 ' 00:11:20.533 06:13:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.533 06:13:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.533 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.534 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:20.534 Cannot find device "nvmf_init_br" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:20.534 Cannot find device "nvmf_init_br2" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:20.534 Cannot find device "nvmf_tgt_br" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.534 Cannot find device "nvmf_tgt_br2" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:20.534 Cannot find device "nvmf_init_br" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:20.534 Cannot find device "nvmf_init_br2" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:20.534 Cannot find device "nvmf_tgt_br" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:20.534 Cannot find device "nvmf_tgt_br2" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:20.534 Cannot find device "nvmf_br" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:20.534 Cannot find device "nvmf_init_if" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:20.534 Cannot find device "nvmf_init_if2" 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.534 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:20.534 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:20.534 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:20.534 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:20.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:20.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:11:20.793 00:11:20.793 --- 10.0.0.3 ping statistics --- 00:11:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.793 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:20.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:20.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:11:20.793 00:11:20.793 --- 10.0.0.4 ping statistics --- 00:11:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.793 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:20.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:20.793 00:11:20.793 --- 10.0.0.1 ping statistics --- 00:11:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.793 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:20.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:20.793 00:11:20.793 --- 10.0.0.2 ping statistics --- 00:11:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.793 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=80807 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 80807 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 80807 ']' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.793 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:20.793 [2024-11-27 06:13:40.264817] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
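[editor's note] With connectivity verified in all four directions, the target is launched inside the namespace and the harness blocks until its RPC socket answers. A rough equivalent of nvmfappstart/waitforlisten; the polling loop is an illustrative stand-in for the real helper:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

The unix socket at /var/tmp/spdk.sock is reachable from the default namespace because network namespaces do not isolate the filesystem.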
00:11:20.793 [2024-11-27 06:13:40.264871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.051 [2024-11-27 06:13:40.382450] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:21.051 [2024-11-27 06:13:40.407734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.051 [2024-11-27 06:13:40.426829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.051 [2024-11-27 06:13:40.426871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.051 [2024-11-27 06:13:40.426882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.051 [2024-11-27 06:13:40.426889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.051 [2024-11-27 06:13:40.426897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.051 [2024-11-27 06:13:40.427175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.613 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.613 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:21.613 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.613 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.613 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.871 [2024-11-27 06:13:41.198662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.871 Malloc0 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.871 06:13:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.871 [2024-11-27 06:13:41.241895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=80857 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 80857 /var/tmp/bdevperf.sock 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 80857 ']' 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.871 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.871 [2024-11-27 06:13:41.285146] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:21.871 [2024-11-27 06:13:41.285204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80857 ] 00:11:22.128 [2024-11-27 06:13:41.401086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:11:22.129 [2024-11-27 06:13:41.427689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.129 [2024-11-27 06:13:41.447761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.693 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.693 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:22.693 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:22.693 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.693 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.951 NVMe0n1 00:11:22.951 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.951 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:22.951 Running I/O for 10 seconds... 00:11:24.831 9032.00 IOPS, 35.28 MiB/s [2024-11-27T06:13:45.747Z] 9200.50 IOPS, 35.94 MiB/s [2024-11-27T06:13:46.689Z] 9208.00 IOPS, 35.97 MiB/s [2024-11-27T06:13:47.625Z] 9210.00 IOPS, 35.98 MiB/s [2024-11-27T06:13:48.558Z] 9284.00 IOPS, 36.27 MiB/s [2024-11-27T06:13:49.540Z] 9380.67 IOPS, 36.64 MiB/s [2024-11-27T06:13:50.517Z] 9489.43 IOPS, 37.07 MiB/s [2024-11-27T06:13:51.448Z] 9524.00 IOPS, 37.20 MiB/s [2024-11-27T06:13:52.381Z] 9582.22 IOPS, 37.43 MiB/s [2024-11-27T06:13:52.639Z] 9624.60 IOPS, 37.60 MiB/s 00:11:33.109 Latency(us) 00:11:33.109 [2024-11-27T06:13:52.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.109 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:33.109 Verification LBA range: start 0x0 length 0x4000 00:11:33.109 NVMe0n1 : 10.07 9662.71 37.74 0.00 0.00 105557.20 19459.15 70577.23 00:11:33.109 [2024-11-27T06:13:52.640Z] =================================================================================================================== 00:11:33.110 [2024-11-27T06:13:52.640Z] Total : 9662.71 37.74 0.00 0.00 105557.20 19459.15 70577.23 00:11:33.110 { 00:11:33.110 "results": [ 00:11:33.110 { 00:11:33.110 "job": "NVMe0n1", 00:11:33.110 "core_mask": "0x1", 00:11:33.110 "workload": "verify", 00:11:33.110 "status": "finished", 00:11:33.110 "verify_range": { 00:11:33.110 "start": 0, 00:11:33.110 "length": 16384 00:11:33.110 }, 00:11:33.110 "queue_depth": 1024, 00:11:33.110 "io_size": 4096, 00:11:33.110 "runtime": 10.066535, 00:11:33.110 "iops": 9662.70916457351, 00:11:33.110 "mibps": 37.74495767411528, 00:11:33.110 "io_failed": 0, 00:11:33.110 "io_timeout": 0, 00:11:33.110 "avg_latency_us": 105557.20331726913, 00:11:33.110 "min_latency_us": 19459.150769230768, 00:11:33.110 "max_latency_us": 70577.23076923077 00:11:33.110 } 00:11:33.110 ], 00:11:33.110 "core_count": 1 00:11:33.110 } 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 80857 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 80857 ']' 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
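[editor's note] Taken together, queue_depth.sh's RPC sequence above amounts to the following, issued first against the target socket and then against bdevperf's private socket (paths and arguments exactly as traced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # bdevperf runs in the default namespace and connects across the bridge:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

-q 1024 is the point of the test: the verify workload is driven at the queue depth under test, and the ~9.6k IOPS figure above is this run's result.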
common/autotest_common.sh@958 -- # kill -0 80857 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80857 00:11:33.110 killing process with pid 80857 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80857' 00:11:33.110 Received shutdown signal, test time was about 10.000000 seconds 00:11:33.110 00:11:33.110 Latency(us) 00:11:33.110 [2024-11-27T06:13:52.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.110 [2024-11-27T06:13:52.640Z] =================================================================================================================== 00:11:33.110 [2024-11-27T06:13:52.640Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 80857 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 80857 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.110 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.110 rmmod nvme_tcp 00:11:33.110 rmmod nvme_fabrics 00:11:33.369 rmmod nvme_keyring 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 80807 ']' 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 80807 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 80807 ']' 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 80807 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.369 06:13:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80807 00:11:33.369 killing process with pid 80807 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80807' 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 80807 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 80807 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:33.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:33.627 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:33.627 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:33.627 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:33.627 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:33.627 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 
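[editor's note] nvmftestfini then unwinds everything in reverse; flattened, the teardown traced above is approximately:

    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null
    modprobe -v -r nvme-tcp          # pulls nvme_fabrics/nvme_keyring out too, per the rmmod lines
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules tagged at setup
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed: what _remove_spdk_ns ultimately does

The SPDK_NVMF comment attached to each iptables rule at setup exists precisely so this grep-based restore can remove the harness's rules without touching the host's.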
00:11:33.627 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:33.628 00:11:33.628 real 0m13.362s 00:11:33.628 user 0m23.259s 00:11:33.628 sys 0m1.551s 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:33.628 ************************************ 00:11:33.628 END TEST nvmf_queue_depth 00:11:33.628 ************************************ 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.628 ************************************ 00:11:33.628 START TEST nvmf_target_multipath 00:11:33.628 ************************************ 00:11:33.628 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:33.887 * Looking for test storage... 
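[editor's note] The storage probe is followed by an lcov version check; the cmp_versions walk traced next compares dot-separated version fields numerically, left to right. Roughly, as a sketch rather than the verbatim helper:

    ver_lt() {                       # returns 0 if $1 < $2 (the '<' case of cmp_versions)
        local IFS=.- i               # split on dots and dashes, as read -ra ver1 does below
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                     # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x: use the branch/function coverage flags"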
00:11:33.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.887 --rc genhtml_branch_coverage=1 00:11:33.887 --rc genhtml_function_coverage=1 00:11:33.887 --rc genhtml_legend=1 00:11:33.887 --rc geninfo_all_blocks=1 00:11:33.887 --rc geninfo_unexecuted_blocks=1 00:11:33.887 00:11:33.887 ' 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.887 --rc genhtml_branch_coverage=1 00:11:33.887 --rc genhtml_function_coverage=1 00:11:33.887 --rc genhtml_legend=1 00:11:33.887 --rc geninfo_all_blocks=1 00:11:33.887 --rc geninfo_unexecuted_blocks=1 00:11:33.887 00:11:33.887 ' 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.887 --rc genhtml_branch_coverage=1 00:11:33.887 --rc genhtml_function_coverage=1 00:11:33.887 --rc genhtml_legend=1 00:11:33.887 --rc geninfo_all_blocks=1 00:11:33.887 --rc geninfo_unexecuted_blocks=1 00:11:33.887 00:11:33.887 ' 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.887 --rc genhtml_branch_coverage=1 00:11:33.887 --rc genhtml_function_coverage=1 00:11:33.887 --rc genhtml_legend=1 00:11:33.887 --rc geninfo_all_blocks=1 00:11:33.887 --rc geninfo_unexecuted_blocks=1 00:11:33.887 00:11:33.887 ' 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.888 
06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.888 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
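[editor's note] multipath.sh's common.sh pass above also minted a host NQN and ID (nvme gen-hostnqn) that later connect calls pass through NVME_HOST. The pattern, with the UUID from this run; the connect target shown is assumed for illustration:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare UUID for --hostid (derivation sketched)
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1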
MALLOC_BDEV_SIZE=64 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:33.888 06:13:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:33.888 Cannot find device "nvmf_init_br" 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:33.888 Cannot find device "nvmf_init_br2" 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:33.888 Cannot find device "nvmf_tgt_br" 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.888 Cannot find device "nvmf_tgt_br2" 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:33.888 Cannot find device "nvmf_init_br" 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:33.888 Cannot find device "nvmf_init_br2" 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:33.888 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:33.888 Cannot find device "nvmf_tgt_br" 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:33.889 Cannot find device "nvmf_tgt_br2" 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:33.889 Cannot find device "nvmf_br" 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:33.889 Cannot find device "nvmf_init_if" 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:33.889 Cannot find device "nvmf_init_if2" 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:33.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:33.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:33.889 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
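[editor's note] Each failed cleanup probe above is immediately followed by a bare 'true' traced at the same script line number, so the guards appear to be written as cmd || true, making a missing device non-fatal on a freshly reset host:

    ip link set nvmf_init_br nomaster || true    # 'Cannot find device' is expected first time
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true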
00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:34.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:11:34.147 00:11:34.147 --- 10.0.0.3 ping statistics --- 00:11:34.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.147 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:34.147 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:34.147 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:11:34.147 00:11:34.147 --- 10.0.0.4 ping statistics --- 00:11:34.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.147 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:11:34.147 00:11:34.147 --- 10.0.0.1 ping statistics --- 00:11:34.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.147 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:34.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:34.147 00:11:34.147 --- 10.0.0.2 ping statistics --- 00:11:34.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.147 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=81243 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 81243 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 81243 ']' 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.147 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
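[Annotation] nvmfappstart prefixes the target command with the namespace wrapper (NVMF_APP now begins with `ip netns exec nvmf_tgt_ns_spdk`), records the pid (81243 in this run), and blocks until the app answers on its RPC socket; the exact launch is traced at the start of the next block. A minimal equivalent of that start-and-wait step, assuming the repo paths from this run (the polling loop is a sketch, not the verbatim waitforlisten helper):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until /var/tmp/spdk.sock accepts RPCs (waitforlisten's job)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1      # bail out if the target died during startup
      sleep 0.5
  done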
00:11:34.148 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.148 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:34.148 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.405 [2024-11-27 06:13:53.696188] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:34.405 [2024-11-27 06:13:53.696243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.405 [2024-11-27 06:13:53.817829] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:34.405 [2024-11-27 06:13:53.846220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.405 [2024-11-27 06:13:53.867378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.405 [2024-11-27 06:13:53.867415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.405 [2024-11-27 06:13:53.867422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.405 [2024-11-27 06:13:53.867428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.405 [2024-11-27 06:13:53.867434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
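[Annotation] The `-m 0xF` core mask passed above is why exactly four reactors start on the next lines: 0xF is binary 1111, one bit per logical core, so SPDK pins a reactor to each of cores 0 through 3. For instance:

  # the lowest four bits set -> cores 0,1,2,3
  printf '0x%X\n' "$(( (1 << 4) - 1 ))"    # prints 0xF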
00:11:34.405 [2024-11-27 06:13:53.868464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.405 [2024-11-27 06:13:53.869268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.405 [2024-11-27 06:13:53.869700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.405 [2024-11-27 06:13:53.869807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:35.337 [2024-11-27 06:13:54.782053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.337 06:13:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:35.595 Malloc0 00:11:35.595 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:35.853 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.111 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:36.368 [2024-11-27 06:13:55.643680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:36.368 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:36.368 [2024-11-27 06:13:55.843878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:36.368 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:36.625 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:36.882 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # 
waitforserial SPDKISFASTANDAWESOME 00:11:36.882 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:36.882 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.882 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:36.882 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:38.777 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:38.777 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:38.777 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
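[Annotation] The provisioning traced above is a short RPC sequence followed by one `nvme connect` per path, after which two controllers (nvme0c0n1 and nvme0c1n1) surface under /sys/class/nvme-subsystem and the check_ana_state helper starts polling their ANA state. Reconstructed from the trace; the flag glosses are best-effort readings of rpc.py and nvme-cli (`-r` enabling ANA reporting, `-g`/`-G` requesting TCP header/data digests), and the helper below mirrors the traced logic rather than quoting the script verbatim:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # path A
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420   # path B
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G   # one controller per path
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

  # the ana_state poll as traced: wait up to 20 s for a path to report the expected state
  check_ana_state() {
      local path=$1 ana_state=$2 timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1
          sleep 1s
      done
  }
  check_ana_state nvme0c0n1 optimized

The remainder of the test flips listener states with nvmf_subsystem_listener_set_ana_state while fio I/O is in flight, which is the multipath behavior the two six-minute jobs below are exercising.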
00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=81377 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:39.034 06:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:39.034 [global] 00:11:39.034 thread=1 00:11:39.034 invalidate=1 00:11:39.034 rw=randrw 00:11:39.034 time_based=1 00:11:39.034 runtime=6 00:11:39.034 ioengine=libaio 00:11:39.034 direct=1 00:11:39.034 bs=4096 00:11:39.034 iodepth=128 00:11:39.034 norandommap=0 00:11:39.034 numjobs=1 00:11:39.034 00:11:39.034 verify_dump=1 00:11:39.034 verify_backlog=512 00:11:39.034 verify_state_save=0 00:11:39.034 do_verify=1 00:11:39.034 verify=crc32c-intel 00:11:39.034 [job0] 00:11:39.034 filename=/dev/nvme0n1 00:11:39.034 Could not set queue depth (nvme0n1) 00:11:39.034 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.034 fio-3.35 00:11:39.034 Starting 1 thread 00:11:39.964 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:40.220 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local 
ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:40.477 06:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:41.407 06:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:41.407 06:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:41.407 06:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:41.407 06:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:41.667 06:14:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:41.925 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:41.926 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:41.926 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:41.926 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:42.859 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:42.859 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:42.859 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:42.859 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 81377 00:11:45.396 00:11:45.396 job0: (groupid=0, jobs=1): err= 0: pid=81403: Wed Nov 27 06:14:04 2024 00:11:45.396 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(272MiB/6006msec) 00:11:45.396 slat (usec): min=2, max=11612, avg=51.38, stdev=250.89 00:11:45.396 clat (usec): min=1878, max=21205, avg=7562.53, stdev=1173.37 00:11:45.396 lat (usec): min=1886, max=21229, avg=7613.91, stdev=1185.68 00:11:45.396 clat percentiles (usec): 00:11:45.396 | 1.00th=[ 4621], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 6783], 00:11:45.396 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7701], 00:11:45.396 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[ 8848], 95.00th=[ 9372], 00:11:45.396 | 99.00th=[11076], 99.50th=[11469], 99.90th=[19006], 99.95th=[19268], 00:11:45.396 | 99.99th=[19530] 00:11:45.396 bw ( KiB/s): min=12480, max=32304, per=52.28%, avg=24262.55, stdev=6437.29, samples=11 00:11:45.396 iops : min= 3120, max= 8076, avg=6065.64, stdev=1609.32, samples=11 00:11:45.396 write: IOPS=6971, BW=27.2MiB/s (28.6MB/s)(145MiB/5311msec); 0 zone resets 00:11:45.396 slat (usec): min=4, max=3249, avg=58.74, stdev=165.79 00:11:45.396 clat (usec): min=1233, max=12512, avg=6348.66, stdev=897.48 00:11:45.396 lat (usec): min=1255, max=12532, avg=6407.40, stdev=902.09 00:11:45.396 clat percentiles (usec): 00:11:45.396 | 1.00th=[ 3523], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 5800], 00:11:45.396 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6521], 00:11:45.396 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7242], 95.00th=[ 7570], 00:11:45.396 | 99.00th=[ 9110], 99.50th=[ 9765], 99.90th=[11076], 99.95th=[11469], 00:11:45.396 | 99.99th=[12256] 00:11:45.396 bw ( KiB/s): min=13056, max=31464, per=87.03%, avg=24270.55, stdev=6154.51, samples=11 00:11:45.396 iops : min= 3264, max= 7866, avg=6067.64, stdev=1538.63, samples=11 00:11:45.396 lat (msec) : 2=0.02%, 
4=0.86%, 10=97.09%, 20=2.03%, 50=0.01% 00:11:45.396 cpu : usr=3.70%, sys=17.59%, ctx=6232, majf=0, minf=151 00:11:45.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:45.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.396 issued rwts: total=69678,37028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.396 00:11:45.396 Run status group 0 (all jobs): 00:11:45.396 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=272MiB (285MB), run=6006-6006msec 00:11:45.396 WRITE: bw=27.2MiB/s (28.6MB/s), 27.2MiB/s-27.2MiB/s (28.6MB/s-28.6MB/s), io=145MiB (152MB), run=5311-5311msec 00:11:45.396 00:11:45.396 Disk stats (read/write): 00:11:45.396 nvme0n1: ios=68879/36045, merge=0/0, ticks=494817/216180, in_queue=710997, util=98.61% 00:11:45.396 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:45.396 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:45.658 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:46.599 06:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:46.599 06:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:46.599 06:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:46.599 06:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:46.599 06:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=81534 00:11:46.599 06:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:46.599 06:14:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:46.599 [global] 00:11:46.599 thread=1 00:11:46.599 invalidate=1 00:11:46.599 rw=randrw 00:11:46.599 time_based=1 00:11:46.599 runtime=6 00:11:46.599 ioengine=libaio 00:11:46.599 direct=1 00:11:46.599 bs=4096 00:11:46.599 iodepth=128 00:11:46.599 norandommap=0 00:11:46.599 numjobs=1 00:11:46.599 00:11:46.599 verify_dump=1 00:11:46.599 verify_backlog=512 00:11:46.599 verify_state_save=0 00:11:46.599 do_verify=1 00:11:46.599 verify=crc32c-intel 00:11:46.599 [job0] 00:11:46.599 filename=/dev/nvme0n1 00:11:46.599 Could not set queue depth (nvme0n1) 00:11:46.860 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.860 fio-3.35 00:11:46.860 Starting 1 thread 00:11:47.876 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:47.876 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:48.137 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:49.081 06:14:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:49.081 06:14:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:49.081 06:14:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:49.081 06:14:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:49.342 06:14:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:49.604 06:14:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:50.553 06:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:50.553 06:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:50.553 06:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:50.553 06:14:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 81534 00:11:53.101 00:11:53.101 job0: (groupid=0, jobs=1): err= 0: pid=81555: Wed Nov 27 06:14:12 2024 00:11:53.101 read: IOPS=13.0k, BW=50.9MiB/s (53.4MB/s)(306MiB/6004msec) 00:11:53.101 slat (usec): min=2, max=7729, avg=41.59, stdev=226.29 00:11:53.101 clat (usec): min=133, max=17731, avg=6971.11, stdev=1784.90 00:11:53.101 lat (usec): min=149, max=17768, avg=7012.70, stdev=1805.16 00:11:53.101 clat percentiles (usec): 00:11:53.101 | 1.00th=[ 1713], 5.00th=[ 3720], 10.00th=[ 4555], 20.00th=[ 5604], 00:11:53.101 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7439], 00:11:53.101 | 70.00th=[ 7832], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9372], 00:11:53.101 | 99.00th=[11076], 99.50th=[11863], 99.90th=[15270], 99.95th=[15795], 00:11:53.101 | 99.99th=[15926] 00:11:53.101 bw ( KiB/s): min=13200, max=45864, per=54.28%, avg=28286.09, stdev=9609.94, samples=11 00:11:53.101 iops : min= 3300, max=11466, avg=7071.45, stdev=2402.47, samples=11 00:11:53.101 write: IOPS=8026, BW=31.4MiB/s (32.9MB/s)(153MiB/4880msec); 0 zone resets 00:11:53.101 slat (usec): min=7, max=1871, avg=47.22, stdev=140.42 00:11:53.101 clat (usec): min=126, max=12726, avg=5536.82, stdev=1688.18 00:11:53.102 lat (usec): min=146, max=12761, avg=5584.04, stdev=1701.94 00:11:53.102 clat percentiles (usec): 00:11:53.102 | 1.00th=[ 1037], 5.00th=[ 2573], 10.00th=[ 3064], 20.00th=[ 3785], 00:11:53.102 | 30.00th=[ 4817], 40.00th=[ 5735], 50.00th=[ 6063], 60.00th=[ 6325], 00:11:53.102 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7111], 95.00th=[ 7570], 00:11:53.102 | 99.00th=[ 9372], 99.50th=[ 9896], 99.90th=[11338], 99.95th=[11731], 00:11:53.102 | 99.99th=[12125] 00:11:53.102 bw ( KiB/s): min=13576, max=46464, per=88.13%, avg=28294.27, stdev=9410.32, samples=11 00:11:53.102 iops : min= 3394, max=11616, avg=7073.55, stdev=2352.58, samples=11 00:11:53.102 lat (usec) : 250=0.02%, 500=0.15%, 750=0.24%, 1000=0.22% 00:11:53.102 lat (msec) : 2=1.06%, 4=10.18%, 10=86.02%, 20=2.10% 00:11:53.102 cpu : usr=4.01%, sys=18.62%, ctx=7035, majf=0, minf=151 00:11:53.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:53.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:53.102 issued rwts: total=78224,39167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.102 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:53.102 00:11:53.102 Run status group 0 (all jobs): 00:11:53.102 READ: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=306MiB (320MB), run=6004-6004msec 00:11:53.102 WRITE: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=153MiB (160MB), run=4880-4880msec 00:11:53.102 00:11:53.102 Disk stats (read/write): 00:11:53.102 nvme0n1: ios=76540/39167, merge=0/0, ticks=503255/202234, in_queue=705489, util=98.73% 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:53.102 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.364 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:53.364 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:53.364 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.365 rmmod nvme_tcp 00:11:53.365 rmmod nvme_fabrics 00:11:53.365 rmmod nvme_keyring 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 81243 ']' 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 81243 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 81243 ']' 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 81243 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81243 00:11:53.365 killing process with pid 81243 00:11:53.365 06:14:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81243' 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 81243 00:11:53.365 06:14:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 81243 00:11:53.626 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.626 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:53.627 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:53.888 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:53.888 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:53.888 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:53.888 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:53.889 06:14:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:53.889 00:11:53.889 real 0m20.145s 00:11:53.889 user 1m18.216s 00:11:53.889 sys 0m4.944s 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:53.889 ************************************ 00:11:53.889 END TEST nvmf_target_multipath 00:11:53.889 ************************************ 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.889 ************************************ 00:11:53.889 START TEST nvmf_zcopy 00:11:53.889 ************************************ 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:53.889 * Looking for test storage... 
00:11:53.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.889 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.151 --rc genhtml_branch_coverage=1 00:11:54.151 --rc genhtml_function_coverage=1 00:11:54.151 --rc genhtml_legend=1 00:11:54.151 --rc geninfo_all_blocks=1 00:11:54.151 --rc geninfo_unexecuted_blocks=1 00:11:54.151 00:11:54.151 ' 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.151 --rc genhtml_branch_coverage=1 00:11:54.151 --rc genhtml_function_coverage=1 00:11:54.151 --rc genhtml_legend=1 00:11:54.151 --rc geninfo_all_blocks=1 00:11:54.151 --rc geninfo_unexecuted_blocks=1 00:11:54.151 00:11:54.151 ' 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.151 --rc genhtml_branch_coverage=1 00:11:54.151 --rc genhtml_function_coverage=1 00:11:54.151 --rc genhtml_legend=1 00:11:54.151 --rc geninfo_all_blocks=1 00:11:54.151 --rc geninfo_unexecuted_blocks=1 00:11:54.151 00:11:54.151 ' 00:11:54.151 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.151 --rc genhtml_branch_coverage=1 00:11:54.151 --rc genhtml_function_coverage=1 00:11:54.151 --rc genhtml_legend=1 00:11:54.151 --rc geninfo_all_blocks=1 00:11:54.151 --rc geninfo_unexecuted_blocks=1 00:11:54.151 00:11:54.151 ' 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
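[Annotation] The lt/cmp_versions trace above is the tree's generic dotted-version comparator: both strings are split on `.`, `-` and `:` and compared field by field as integers, so `lt 1.15 2` is true because 1 < 2 in the first field. A compact restatement of the traced logic (missing fields treated as 0; the in-tree helper additionally validates each field with `decimal`):

  version_lt() {    # returns 0 iff $1 < $2, compared field-wise
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # matches the traced result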
00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
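[Annotation] nvmftestinit now repeats the veth setup for the next test, and the cleanup probes that follow are deliberately idempotent: each delete shares its script line with a trailing `true` (the repeated @NNN markers in the trace), so "Cannot find device" is informational rather than fatal under `set -e`. The "[: : integer expression expected" message above appears to come from an empty variable being compared with `-eq` at nvmf/common.sh line 33 and is likewise benign in this run. The cleanup pattern, in miniature:

  set -e
  # deletion failures are expected on a clean host; `|| true` keeps set -e quiet
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true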
00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:54.152 Cannot find device "nvmf_init_br" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:54.152 06:14:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:54.152 Cannot find device "nvmf_init_br2" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:54.152 Cannot find device "nvmf_tgt_br" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.152 Cannot find device "nvmf_tgt_br2" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:54.152 Cannot find device "nvmf_init_br" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:54.152 Cannot find device "nvmf_init_br2" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:54.152 Cannot find device "nvmf_tgt_br" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:54.152 Cannot find device "nvmf_tgt_br2" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:54.152 Cannot find device "nvmf_br" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:54.152 Cannot find device "nvmf_init_if" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:54.152 Cannot find device "nvmf_init_if2" 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:54.152 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.153 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:54.415 06:14:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:54.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:54.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:11:54.415 00:11:54.415 --- 10.0.0.3 ping statistics --- 00:11:54.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.415 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:54.415 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:54.415 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:11:54.415 00:11:54.415 --- 10.0.0.4 ping statistics --- 00:11:54.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.415 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:54.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:54.415 00:11:54.415 --- 10.0.0.1 ping statistics --- 00:11:54.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.415 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:54.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:54.415 00:11:54.415 --- 10.0.0.2 ping statistics --- 00:11:54.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.415 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=81883 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 81883 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 81883 ']' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.415 06:14:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.415 [2024-11-27 06:14:13.929611] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:54.415 [2024-11-27 06:14:13.929681] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.677 [2024-11-27 06:14:14.046686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:11:54.677 [2024-11-27 06:14:14.071618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.677 [2024-11-27 06:14:14.104631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.677 [2024-11-27 06:14:14.104855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.677 [2024-11-27 06:14:14.104949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.677 [2024-11-27 06:14:14.104977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.677 [2024-11-27 06:14:14.104993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.677 [2024-11-27 06:14:14.105333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 [2024-11-27 06:14:14.289936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 [2024-11-27 06:14:14.310034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 malloc0 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:54.938 { 00:11:54.938 "params": { 00:11:54.938 "name": "Nvme$subsystem", 00:11:54.938 "trtype": "$TEST_TRANSPORT", 00:11:54.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.938 "adrfam": "ipv4", 00:11:54.938 "trsvcid": "$NVMF_PORT", 00:11:54.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.938 "hdgst": ${hdgst:-false}, 00:11:54.938 "ddgst": ${ddgst:-false} 00:11:54.938 }, 00:11:54.938 "method": "bdev_nvme_attach_controller" 00:11:54.938 } 00:11:54.938 EOF 00:11:54.938 )") 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
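Note: rpc_cmd in this trace forwards its arguments to scripts/rpc.py against the target's /var/tmp/spdk.sock (the socket waitforlisten polled above). The provisioning sequence just traced (zero-copy TCP transport, subsystem, listeners, malloc bdev, namespace), reissued as standalone calls; $rpc is shorthand introduced here, not a variable from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1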
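Note: gen_nvmf_target_json, partially traced above, expands one bdev_nvme_attach_controller fragment per subsystem from a heredoc template, validates it with jq, and joins the fragments with a comma IFS; the resolved JSON it prints is what bdevperf consumes via --json /dev/fd/62. A sketch of the same pattern as a self-contained function (gen_target_json is a hypothetical name; printf replaces the heredoc so the sketch survives re-indentation):

    gen_target_json() {
        local config=() subsystem
        for subsystem in "${@:-1}"; do
            # One attach entry per subsystem id, matching the traced template.
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp",
                "traddr": "10.0.0.3", "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode%s",
                "hostnqn": "nqn.2016-06.io.spdk:host%s",
                "hdgst": false, "ddgst": false},
                "method": "bdev_nvme_attach_controller"}' \
                "$subsystem" "$subsystem" "$subsystem")")
        done
        local IFS=,
        # jq validates and pretty-prints; with several ids the comma-joined
        # fragments would need an enclosing wrapper before piping to jq.
        printf '%s\n' "${config[*]}" | jq .
    }
    gen_target_json 1    # -> the Nvme1 object printed in the trace below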
00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:54.938 06:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:54.938 "params": { 00:11:54.938 "name": "Nvme1", 00:11:54.938 "trtype": "tcp", 00:11:54.938 "traddr": "10.0.0.3", 00:11:54.938 "adrfam": "ipv4", 00:11:54.938 "trsvcid": "4420", 00:11:54.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.938 "hdgst": false, 00:11:54.938 "ddgst": false 00:11:54.938 }, 00:11:54.938 "method": "bdev_nvme_attach_controller" 00:11:54.938 }' 00:11:54.938 [2024-11-27 06:14:14.388601] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:11:54.938 [2024-11-27 06:14:14.388685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81926 ] 00:11:55.199 [2024-11-27 06:14:14.504362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:55.199 [2024-11-27 06:14:14.531593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.199 [2024-11-27 06:14:14.566757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.460 Running I/O for 10 seconds... 00:11:57.348 6639.00 IOPS, 51.87 MiB/s [2024-11-27T06:14:17.822Z] 6638.50 IOPS, 51.86 MiB/s [2024-11-27T06:14:18.766Z] 6663.67 IOPS, 52.06 MiB/s [2024-11-27T06:14:20.151Z] 6705.00 IOPS, 52.38 MiB/s [2024-11-27T06:14:21.165Z] 6736.60 IOPS, 52.63 MiB/s [2024-11-27T06:14:22.107Z] 6755.67 IOPS, 52.78 MiB/s [2024-11-27T06:14:23.049Z] 6768.57 IOPS, 52.88 MiB/s [2024-11-27T06:14:23.993Z] 6776.00 IOPS, 52.94 MiB/s [2024-11-27T06:14:24.937Z] 6781.67 IOPS, 52.98 MiB/s [2024-11-27T06:14:24.937Z] 6782.70 IOPS, 52.99 MiB/s 00:12:05.407 Latency(us) 00:12:05.407 [2024-11-27T06:14:24.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.407 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:05.407 Verification LBA range: start 0x0 length 0x1000 00:12:05.407 Nvme1n1 : 10.01 6783.95 53.00 0.00 0.00 18813.73 1814.84 27424.30 00:12:05.407 [2024-11-27T06:14:24.937Z] =================================================================================================================== 00:12:05.407 [2024-11-27T06:14:24.937Z] Total : 6783.95 53.00 0.00 0.00 18813.73 1814.84 27424.30 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=82043 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:05.670 { 00:12:05.670 "params": { 00:12:05.670 "name": "Nvme$subsystem", 00:12:05.670 "trtype": "$TEST_TRANSPORT", 00:12:05.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.670 "adrfam": "ipv4", 00:12:05.670 "trsvcid": "$NVMF_PORT", 00:12:05.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.670 "hdgst": ${hdgst:-false}, 00:12:05.670 "ddgst": ${ddgst:-false} 00:12:05.670 }, 00:12:05.670 "method": "bdev_nvme_attach_controller" 00:12:05.670 } 00:12:05.670 EOF 00:12:05.670 )") 00:12:05.670 [2024-11-27 06:14:24.946663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.670 [2024-11-27 06:14:24.946709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:05.670 2024/11/27 06:14:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:05.670 06:14:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:05.670 "params": { 00:12:05.670 "name": "Nvme1", 00:12:05.670 "trtype": "tcp", 00:12:05.670 "traddr": "10.0.0.3", 00:12:05.670 "adrfam": "ipv4", 00:12:05.670 "trsvcid": "4420", 00:12:05.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:05.670 "hdgst": false, 00:12:05.670 "ddgst": false 00:12:05.670 }, 00:12:05.670 "method": "bdev_nvme_attach_controller" 00:12:05.670 }' 00:12:05.670 [2024-11-27 06:14:24.954620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.670 [2024-11-27 06:14:24.954634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.670 2024/11/27 06:14:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.670 [2024-11-27 06:14:24.962621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.670 [2024-11-27 06:14:24.962640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.670 2024/11/27 06:14:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.670 [2024-11-27 06:14:24.970622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.670 [2024-11-27 06:14:24.970816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.670 2024/11/27 06:14:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.670 [2024-11-27 06:14:24.981801] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:12:05.670 [2024-11-27 06:14:24.982044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82043 ] 00:12:05.671 [2024-11-27 06:14:24.982650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:24.982690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:24.994645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:24.994670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.002632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.002738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.010637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.010732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.018640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.018735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 
06:14:25.026645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.026736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.034648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.034745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.046654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.046757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.054649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.054741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.062655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.062751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.070657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.070753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.082663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 
[2024-11-27 06:14:25.082762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.090661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.090756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.097681] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:05.671 [2024-11-27 06:14:25.098673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.098860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.110680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.110789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.118671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.118692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.126425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.671 [2024-11-27 06:14:25.126668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.126676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.134669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.134779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.142675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.142784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.150679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.150786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.158371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.671 [2024-11-27 06:14:25.162688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.162792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.170692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.170796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.671 [2024-11-27 06:14:25.178685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.671 [2024-11-27 06:14:25.178708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.671 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:12:05.671 [... the same three-line failure repeats verbatim for the attempts stamped 06:14:25.186685 through 06:14:25.282752: subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace / error on JSON-RPC call, method: nvmf_subsystem_add_ns, err: Code=-32602 Msg=Invalid parameters ...] [2024-11-27 06:14:25.290732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-27 06:14:25.290760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*:
Unable to add namespace 00:12:05.936 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-27 06:14:25.298734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-27 06:14:25.298763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-27 06:14:25.306753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-27 06:14:25.306780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-27 06:14:25.318745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-27 06:14:25.318772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-27 06:14:25.326761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-27 06:14:25.326797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 2024/11/27 06:14:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:05.936 [2024-11-27 06:14:25.334744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:05.936 [2024-11-27 06:14:25.334770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:05.936 Running I/O for 5 seconds... 
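Note: the two bdevperf passes differ only in workload knobs; decoded from bdevperf's usage text, with the first pass's result line as a cross-check (6783.95 IOPS x 8192 B is about 53 MiB/s, matching the table above):

    # First pass (completed above): 10 s verify workload, QD 128, 8 KiB IOs
    bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
    # Second pass (running here): 5 s random read/write, 50% reads
    bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
    # -t runtime seconds, -q queue depth, -w workload, -o IO size bytes, -M read mix %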
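Note: every nvmf_subsystem_add_ns attempt in this stretch fails by design: NSID 1 is already attached to cnode1, so the paused-subsystem path (nvmf_rpc_ns_paused) rejects each request and the RPC client logs Code=-32602 while the 5-second randrw job keeps I/O in flight. One such call, reproduced standalone against the same socket as before:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Expected while NSID 1 is attached:
    #   error received for nvmf_subsystem_add_ns method: Code=-32602 Msg=Invalid parameters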
[... the same three-entry error sequence (JSON-RPC Code=-32602, "Requested NSID 1 already in use", "Unable to add namespace") repeats for every subsequent nvmf_subsystem_add_ns attempt, timestamps 06:14:25.342 through 06:14:26.300, elapsed markers 00:12:05.936 through 00:12:07.027 ...]
[... repetitions continue at 06:14:26.309, 06:14:26.319 and 06:14:26.329 ...]
00:12:07.027 13048.00 IOPS, 101.94 MiB/s [2024-11-27T06:14:26.557Z]
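For scale, the periodic perf sample above is self-consistent with an 8 KiB I/O size (an assumption; the block size is not printed in this excerpt): 101.94 MiB/s / 13048 IOPS = 0.00781 MiB, i.e. about 8.0 KiB per operation, and conversely 13048 x 8 KiB / 1024 = 101.9 MiB/s.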
[... repetitions continue, timestamps 06:14:26.338 through 06:14:26.618 ...]
00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:07.290 [2024-11-27 06:14:26.628148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:07.290 [2024-11-27
06:14:26.628186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.637926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.637969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.647566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.647606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.657325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.657368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.667078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.667117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.676763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.676804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.686584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.686629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.696376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.696417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.706127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.706165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.715829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.715869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.725371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.725409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.738923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.738968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.747449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.747488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.759042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.759084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.768534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.768573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.779807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.779849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.789057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.789093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.798775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.798815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.290 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.290 [2024-11-27 06:14:26.808143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.290 [2024-11-27 06:14:26.808183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.291 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.291 [2024-11-27 06:14:26.817602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.291 [2024-11-27 06:14:26.817640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.827345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.827384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.837070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.837109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.846899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.846951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.856452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.856515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.865797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.865835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.875273] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.875313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.884928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.884966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.898796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.898836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.907952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.907990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.917536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.917576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.927077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.927114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.936626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 
06:14:26.936665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.946146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.946184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.960539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.960576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.969920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.969964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.979583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.979624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.554 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.554 [2024-11-27 06:14:26.989293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.554 [2024-11-27 06:14:26.989335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:26.999044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:26.999085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.008671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.008711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.018391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.018434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.028107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.028148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.037990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.038028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.047671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.047711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.057227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.057268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.066629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.066671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.555 [2024-11-27 06:14:27.076239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.555 [2024-11-27 06:14:27.076281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.555 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.085804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.085844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.095526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.095566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.105121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.105160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.114789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.114831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.124422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.124479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.134171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.134212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.143810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.143851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.153312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.153351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.816 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.816 [2024-11-27 06:14:27.163098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.816 [2024-11-27 06:14:27.163140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.172892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.172942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.182510] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.182549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.192210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.192250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.201900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.201951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.211537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.211579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.221413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.221451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.231200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.231237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.244681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 
06:14:27.244722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.255388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.255428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.263436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.263475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.275152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.275193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.284580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.284617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.295620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.295663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:07.817 [2024-11-27 06:14:27.304814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.817 [2024-11-27 06:14:27.304855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.817 2024/11/27 06:14:27 
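Editor's note on the loop above: this is a negative test. The client repeatedly asks the target to attach bdev malloc0 as namespace ID 1 of nqn.2016-06.io.spdk:cnode1 while NSID 1 is already in use, so every call fails with JSON-RPC error -32602 (Invalid parameters). The %!s(bool=false) fragments are Go's fmt output for a bool passed to a %s verb, i.e. the test client is written in Go. The failure is easy to reproduce by hand against a running SPDK target; the sketch below is illustrative only (not the harness used in this run) and assumes the default RPC socket /var/tmp/spdk.sock plus a pre-created subsystem nqn.2016-06.io.spdk:cnode1 with a bdev named malloc0, both taken from the log:

    #!/usr/bin/env python3
    # Sketch only: issue the same nvmf_subsystem_add_ns request twice over
    # SPDK's JSON-RPC Unix socket and observe the duplicate-NSID failure.
    # Assumes an SPDK target listening on the default /var/tmp/spdk.sock.
    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"

    def rpc_call(method, params, req_id):
        # One JSON-RPC 2.0 request/response exchange over the Unix socket.
        request = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(SOCK_PATH)
            sock.sendall(json.dumps(request).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a complete reply")
                buf += chunk
                try:
                    return json.loads(buf.decode())  # reply is complete once it parses
                except json.JSONDecodeError:
                    continue  # partial read, keep receiving

    # Parameter shape copied from the failing calls in this log.
    params = {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    }

    print(rpc_call("nvmf_subsystem_add_ns", params, 1))
    # expected: a "result" member with the assigned NSID (namespace created)
    print(rpc_call("nvmf_subsystem_add_ns", params, 2))
    # expected: an "error" member with code -32602, "Invalid parameters",
    # matching the log; the target also prints the subsystem.c
    # "Requested NSID 1 already in use" line on its side.

The same exchange is what SPDK's scripts/rpc.py nvmf_subsystem_add_ns helper performs; the second invocation is the one that produces the error block repeated throughout this section.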
13103.00 IOPS, 102.37 MiB/s [2024-11-27T06:14:27.347Z]
[... the same error block repeats a few dozen more times, from 06:14:27.343 through 06:14:27.593 ...]
00:12:08.080 [2024-11-27 06:14:27.603378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.080 [2024-11-27 06:14:27.603416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:12:08.080 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.612820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.612859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.622440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.622482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.636439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.636500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.644504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.644543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.660549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.660589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.669873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.669927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.681031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.681072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.689869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.689924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.699728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.699768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.343 [2024-11-27 06:14:27.709300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.343 [2024-11-27 06:14:27.709342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.343 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.719034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.719074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.728553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.728593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.738177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.738218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.747949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.747991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.757396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.757436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.766757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.766799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.776168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.776207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.785696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.785740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:08.344 [2024-11-27 06:14:27.795456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.795499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.805503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.805545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.815388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.815430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.824995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.825036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.834887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.834941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.844695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.344 [2024-11-27 06:14:27.844736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.344 2024/11/27 06:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:08.344 [2024-11-27 06:14:27.854391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
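Each rejected attempt in this stretch is the same JSON-RPC request: the client pins nsid:1 for bdev malloc0 on subsystem nqn.2016-06.io.spdk:cnode1, and since NSID 1 is already occupied the target answers with JSON-RPC error -32602 (invalid params). A minimal Go sketch of one such request follows; it is an illustration only, not the harness's actual client, and it assumes SPDK's usual default RPC socket path /var/tmp/spdk.sock plus an already-running target with the subsystem and bdev in place.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net"
    )

    func main() {
    	// SPDK serves JSON-RPC over a Unix domain socket; the path is an
    	// assumption here (the common default).
    	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Mirrors the params map printed in the log. Pinning nsid to 1 is
    	// what forces the collision once NSID 1 is already taken.
    	req := map[string]interface{}{
    		"jsonrpc": "2.0",
    		"id":      1,
    		"method":  "nvmf_subsystem_add_ns",
    		"params": map[string]interface{}{
    			"nqn": "nqn.2016-06.io.spdk:cnode1",
    			"namespace": map[string]interface{}{
    				"bdev_name": "malloc0",
    				"nsid":      1,
    			},
    		},
    	}
    	if err := json.NewEncoder(conn).Encode(req); err != nil {
    		panic(err)
    	}

    	// On every attempt after the first, the reply carries the -32602
    	// "Invalid parameters" error seen throughout this log.
    	var resp map[string]interface{}
    	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
    		panic(err)
    	}
    	fmt.Println(resp)
    }

Had the request omitted nsid (or passed 0), the target would pick a free NSID instead of colliding; requesting an occupied NSID explicitly is what makes this a deliberate error-path exercise rather than a malfunction.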
[... the identical rejection sequence continues, one attempt roughly every 10 ms, from 06:14:27.854 through 06:14:28.305 (elapsed 00:12:08.344 through 00:12:08.868), each attempt producing the same subsystem.c:2126 / nvmf_rpc.c:1520 error pair and the same Code=-32602 Msg=Invalid parameters reply ...]
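As an aside, the %!s(bool=false) fragments inside every params dump are not part of the RPC payload: they are Go's fmt package flagging that a %s verb was applied to a bool when the test client stringified its parameter map. A standalone two-liner (illustration only, not taken from the test code) reproduces the artifact:

    package main

    import "fmt"

    func main() {
    	// fmt renders a verb/type mismatch as %!verb(type=value), which is
    	// exactly the %!s(bool=false) artifact in the log's params dumps.
    	fmt.Printf("hide_metadata:%s no_auto_visible:%s\n", false, false)
    	// Output: hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)
    }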
[... the same rejection sequence continues from 06:14:28.305 through 06:14:28.919 (elapsed 00:12:08.868 through 00:12:09.655), with one periodic throughput sample interleaved mid-stream: ...]
00:12:08.868 13138.00 IOPS, 102.64 MiB/s [2024-11-27T06:14:28.398Z]
[... last attempt retained from this stretch: ...]
00:12:09.655 [2024-11-27 06:14:28.919578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:09.655 [2024-11-27 06:14:28.919623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:09.655 2024/11/27 06:14:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:28.933703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:28.933748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:28.943073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:28.943112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:28.958170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:28.958212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:28.970450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:28.970496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:28.987547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:28.987591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.003103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.003148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.015074] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.015119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.031574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.031618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.048354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.048396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.059610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.059651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.076492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.076542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.092144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.092184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.110352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 
06:14:29.110397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.126639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.126683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.144139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.144181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.655 [2024-11-27 06:14:29.161458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.655 [2024-11-27 06:14:29.161497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.655 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.656 [2024-11-27 06:14:29.178092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.656 [2024-11-27 06:14:29.178136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.656 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.916 [2024-11-27 06:14:29.189501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.916 [2024-11-27 06:14:29.189541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.916 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.916 [2024-11-27 06:14:29.206984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.916 [2024-11-27 06:14:29.207024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.916 2024/11/27 06:14:29 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.916 [2024-11-27 06:14:29.220816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.916 [2024-11-27 06:14:29.220856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.916 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.916 [2024-11-27 06:14:29.237264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.916 [2024-11-27 06:14:29.237315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.916 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.916 [2024-11-27 06:14:29.254954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.916 [2024-11-27 06:14:29.254997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.272113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.272159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.284727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.284771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.293744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.293784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.304423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.304500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.319239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.319288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.331097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.331138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 13115.50 IOPS, 102.46 MiB/s [2024-11-27T06:14:29.447Z] [2024-11-27 06:14:29.340240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.340276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.353986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.354026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:09.917 [2024-11-27 06:14:29.370246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.917 [2024-11-27 06:14:29.370288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.917 2024/11/27 06:14:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) 
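The three-line sequence above repeats many times per second for the length of the run: the test keeps re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is still occupied, and each attempt is rejected with JSON-RPC error -32602 (Invalid parameters). The %!s(bool=false) tokens are Go's fmt output for a bool printed through a %s verb by the test client's logger; they are not part of the RPC payload. A minimal sketch of the same call, assuming the default SPDK RPC socket at /var/tmp/spdk.sock and an existing cnode1 subsystem whose NSID 1 is already claimed:

#!/usr/bin/env python3
# Sketch only: send one nvmf_subsystem_add_ns request to a local SPDK target
# and print the error the log above records. Socket path and subsystem state
# are assumptions, not taken from this build.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumption: default SPDK RPC listen address

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        # NSID 1 is already in use, so the target logs
        # "Requested NSID 1 already in use" and answers -32602.
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    reply = json.loads(sock.recv(65536).decode())

# Expected: {'code': -32602, 'message': 'Invalid parameters'}
print(reply.get("error"))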
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.968 [2024-11-27 06:14:30.286447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.968 [2024-11-27 06:14:30.286487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.968 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.968 [2024-11-27 06:14:30.298329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.968 [2024-11-27 06:14:30.298367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.968 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.968 [2024-11-27 06:14:30.310212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.968 [2024-11-27 06:14:30.310248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.968 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.968 [2024-11-27 06:14:30.327171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.968 [2024-11-27 06:14:30.327212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.968 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.968 13137.40 IOPS, 102.64 MiB/s [2024-11-27T06:14:30.498Z] [2024-11-27 06:14:30.342289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.968 [2024-11-27 06:14:30.342328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.968 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.968 00:12:10.968 Latency(us) 00:12:10.968 [2024-11-27T06:14:30.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.968 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:10.968 Nvme1n1 : 5.01 13140.98 102.66 0.00 0.00 9729.48 4486.70 19055.85 00:12:10.968 [2024-11-27T06:14:30.498Z] =================================================================================================================== 00:12:10.968 
[2024-11-27T06:14:30.498Z] Total : 13140.98 102.66 0.00 0.00 9729.48 4486.70 19055.85 00:12:10.968 [2024-11-27 06:14:30.351549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.968 [2024-11-27 06:14:30.351584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.968 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.968 [2024-11-27 06:14:30.359549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.968 [2024-11-27 06:14:30.359578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.371538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.371565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.383539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.383570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.395541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.395570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.407543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.407570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 
06:14:30.419546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.419573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.431551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.431577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.443556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.443584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.455560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.455588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.467561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.467587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.479566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 [2024-11-27 06:14:30.479592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:10.969 [2024-11-27 06:14:30.491582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:10.969 
[2024-11-27 06:14:30.491608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.969 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:11.230 [2024-11-27 06:14:30.503582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.230 [2024-11-27 06:14:30.503609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.230 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:11.230 [2024-11-27 06:14:30.515581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.230 [2024-11-27 06:14:30.515604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.230 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:11.230 [2024-11-27 06:14:30.527584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:11.230 [2024-11-27 06:14:30.527609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:11.230 2024/11/27 06:14:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:11.230 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (82043) - No such process 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 82043 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:11.230 delay0 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:11.230 06:14:30 
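The failure storm above is the point of this zcopy sub-test: it keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while malloc0 still owns it, so every call must come back as Code=-32602. A rough standalone sketch of one failing iteration plus the recovery the script performs next (rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py; paths assume the /home/vagrant/spdk_repo layout of this run):

    # Expected to fail: NSID 1 on cnode1 is still held by malloc0; the
    # target logs "Requested NSID 1 already in use" and the RPC returns
    # Code=-32602 Msg=Invalid parameters, exactly as collapsed above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Freeing the NSID first (zcopy.sh@52) lets the next add succeed:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1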
00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:11.230 06:14:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:12:11.230 [2024-11-27 06:14:30.725457] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:12:17.819 Initializing NVMe Controllers
00:12:17.819 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:12:17.819 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:17.819 Initialization complete. Launching workers.
00:12:17.819 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 834
00:12:17.819 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1121, failed to submit 33
00:12:17.819 success 942, unsuccessful 179, failed 0
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:17.819 06:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:17.819 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 81883 ']'
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 81883
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 81883 ']'
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 81883
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81883
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 81883
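For reference, the abort run summarized above is fully determined by two commands already visible in this trace. A sketch of running them by hand, with the delay-bdev latencies (in microseconds) and abort flags copied from the log:

    # delay0 wraps malloc0 with ~1 s average and p99 read/write latency,
    # so queued I/O stays in flight long enough to be aborted.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # One core (0x1), queue depth 64, 5 s of 50/50 randrw against NSID 1;
    # the NS/CTRLR counters above (1121 aborts submitted, 942 successful)
    # are this tool's output.
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'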
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81883'
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 81883
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 81883
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:12:18.080
00:12:18.080 real 0m24.053s
00:12:18.080 user 0m40.121s
00:12:18.080 sys 0m5.353s 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.080 ************************************ 00:12:18.080 END TEST nvmf_zcopy 00:12:18.080 ************************************ 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:18.080 ************************************ 00:12:18.080 START TEST nvmf_nmic 00:12:18.080 ************************************ 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:18.080 * Looking for test storage... 00:12:18.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.080 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.080 --rc genhtml_branch_coverage=1 00:12:18.080 --rc genhtml_function_coverage=1 00:12:18.081 --rc genhtml_legend=1 00:12:18.081 --rc geninfo_all_blocks=1 00:12:18.081 --rc geninfo_unexecuted_blocks=1 00:12:18.081 00:12:18.081 ' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.081 --rc genhtml_branch_coverage=1 00:12:18.081 --rc genhtml_function_coverage=1 00:12:18.081 --rc genhtml_legend=1 00:12:18.081 --rc geninfo_all_blocks=1 00:12:18.081 --rc geninfo_unexecuted_blocks=1 00:12:18.081 00:12:18.081 ' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.081 --rc genhtml_branch_coverage=1 00:12:18.081 --rc genhtml_function_coverage=1 00:12:18.081 --rc genhtml_legend=1 00:12:18.081 --rc geninfo_all_blocks=1 00:12:18.081 --rc geninfo_unexecuted_blocks=1 00:12:18.081 00:12:18.081 ' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.081 --rc genhtml_branch_coverage=1 00:12:18.081 --rc genhtml_function_coverage=1 00:12:18.081 --rc genhtml_legend=1 00:12:18.081 --rc geninfo_all_blocks=1 00:12:18.081 --rc geninfo_unexecuted_blocks=1 00:12:18.081 00:12:18.081 ' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.081 06:14:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.081 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:18.081 06:14:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.081 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.082 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:18.343 Cannot 
find device "nvmf_init_br" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:18.343 Cannot find device "nvmf_init_br2" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:18.343 Cannot find device "nvmf_tgt_br" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.343 Cannot find device "nvmf_tgt_br2" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:18.343 Cannot find device "nvmf_init_br" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:18.343 Cannot find device "nvmf_init_br2" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:18.343 Cannot find device "nvmf_tgt_br" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:18.343 Cannot find device "nvmf_tgt_br2" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:18.343 Cannot find device "nvmf_br" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:18.343 Cannot find device "nvmf_init_if" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:18.343 Cannot find device "nvmf_init_if2" 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.343 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:18.604 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:18.604 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:18.604 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:18.604 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:18.604 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:18.604 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:18.604 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:18.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:18.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:12:18.605 00:12:18.605 --- 10.0.0.3 ping statistics --- 00:12:18.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.605 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:18.605 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:18.605 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:12:18.605 00:12:18.605 --- 10.0.0.4 ping statistics --- 00:12:18.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.605 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:18.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:18.605 00:12:18.605 --- 10.0.0.1 ping statistics --- 00:12:18.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.605 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:18.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:18.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:18.605 00:12:18.605 --- 10.0.0.2 ping statistics --- 00:12:18.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.605 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=82419 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 82419 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 82419 ']' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:18.605 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.605 [2024-11-27 06:14:38.025156] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:12:18.605 [2024-11-27 06:14:38.025225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.866 [2024-11-27 06:14:38.143078] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
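With the plumbing verified, the harness launches the target inside the namespace (nvmfappstart; pid 82419 in this run) and blocks until the RPC socket answers before issuing any RPCs. A sketch of the equivalent manual steps; the polling loop here is an assumed simplification of autotest_common.sh's waitforlisten, not a copy of it:

    # Command line as logged: instance 0, tracepoint mask 0xFFFF, 4 cores.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done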
00:12:18.866 [2024-11-27 06:14:38.170650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.866 [2024-11-27 06:14:38.207049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.866 [2024-11-27 06:14:38.207105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.866 [2024-11-27 06:14:38.207116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.866 [2024-11-27 06:14:38.207123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.866 [2024-11-27 06:14:38.207129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.866 [2024-11-27 06:14:38.208353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.866 [2024-11-27 06:14:38.208628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.866 [2024-11-27 06:14:38.209599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.866 [2024-11-27 06:14:38.209752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.439 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.439 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:19.439 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.440 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.440 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 [2024-11-27 06:14:38.986966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.702 06:14:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 Malloc0 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 [2024-11-27 06:14:39.049978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.702 test case1: single bdev can't be used in multiple subsystems 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.702 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.703 [2024-11-27 06:14:39.073818] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:19.703 [2024-11-27 06:14:39.073850] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:19.703 [2024-11-27 06:14:39.073858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.703 request: 00:12:19.703 2024/11/27 06:14:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:19.703 { 00:12:19.703 "method": "nvmf_subsystem_add_ns", 00:12:19.703 "params": { 00:12:19.703 "nqn": 
"nqn.2016-06.io.spdk:cnode2", 00:12:19.703 "namespace": { 00:12:19.703 "bdev_name": "Malloc0", 00:12:19.703 "no_auto_visible": false, 00:12:19.703 "hide_metadata": false 00:12:19.703 } 00:12:19.703 } 00:12:19.703 } 00:12:19.703 Got JSON-RPC error response 00:12:19.703 GoRPCClient: error on JSON-RPC call 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:19.703 Adding namespace failed - expected result. 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:19.703 test case2: host connect to nvmf target in multiple paths 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:19.703 [2024-11-27 06:14:39.085939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.703 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:19.964 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:19.964 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.964 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.964 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.964 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.964 06:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.493 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.493 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.493 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.493 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:22.493 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.493 06:14:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:22.493 06:14:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:22.493 [global] 00:12:22.493 thread=1 00:12:22.493 invalidate=1 00:12:22.493 rw=write 00:12:22.493 time_based=1 00:12:22.493 runtime=1 00:12:22.493 ioengine=libaio 00:12:22.493 direct=1 00:12:22.493 bs=4096 00:12:22.493 iodepth=1 00:12:22.493 norandommap=0 00:12:22.493 numjobs=1 00:12:22.493 00:12:22.493 verify_dump=1 00:12:22.493 verify_backlog=512 00:12:22.493 verify_state_save=0 00:12:22.493 do_verify=1 00:12:22.493 verify=crc32c-intel 00:12:22.493 [job0] 00:12:22.493 filename=/dev/nvme0n1 00:12:22.493 Could not set queue depth (nvme0n1) 00:12:22.493 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:22.493 fio-3.35 00:12:22.493 Starting 1 thread 00:12:23.453 00:12:23.453 job0: (groupid=0, jobs=1): err= 0: pid=82529: Wed Nov 27 06:14:42 2024 00:12:23.453 read: IOPS=4565, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1001msec) 00:12:23.453 slat (nsec): min=5839, max=85411, avg=8087.94, stdev=4258.74 00:12:23.453 clat (usec): min=86, max=498, avg=109.13, stdev=26.78 00:12:23.453 lat (usec): min=92, max=508, avg=117.22, stdev=28.87 00:12:23.453 clat percentiles (usec): 00:12:23.453 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 98], 00:12:23.453 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 109], 00:12:23.453 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 127], 00:12:23.453 | 99.00th=[ 269], 99.50th=[ 338], 99.90th=[ 449], 99.95th=[ 482], 00:12:23.453 | 99.99th=[ 498] 00:12:23.453 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:12:23.453 slat (nsec): min=9258, max=65168, avg=11408.80, stdev=2974.28 00:12:23.453 clat (usec): min=65, max=4606, avg=87.33, stdev=141.05 00:12:23.453 lat (usec): min=75, max=4621, avg=98.74, stdev=141.47 00:12:23.453 clat percentiles (usec): 00:12:23.453 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:12:23.453 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:12:23.453 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 92], 95.00th=[ 95], 00:12:23.453 | 99.00th=[ 105], 99.50th=[ 115], 99.90th=[ 3064], 99.95th=[ 3228], 00:12:23.453 | 99.99th=[ 4621] 00:12:23.453 bw ( KiB/s): min=18768, max=18768, per=100.00%, avg=18768.00, stdev= 0.00, samples=1 00:12:23.453 iops : min= 4692, max= 4692, avg=4692.00, stdev= 0.00, samples=1 00:12:23.453 lat (usec) : 100=63.96%, 250=35.31%, 500=0.62%, 750=0.01% 00:12:23.453 lat (msec) : 2=0.01%, 4=0.07%, 10=0.02% 00:12:23.453 cpu : usr=1.50%, sys=7.20%, ctx=9178, majf=0, minf=5 00:12:23.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.453 issued rwts: total=4570,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.453 00:12:23.453 Run status group 0 (all jobs): 00:12:23.453 READ: bw=17.8MiB/s (18.7MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=17.9MiB (18.7MB), run=1001-1001msec 00:12:23.453 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:12:23.453 00:12:23.453 Disk stats (read/write): 00:12:23.454 nvme0n1: ios=4120/4096, merge=0/0, ticks=462/363, in_queue=825, util=89.97% 00:12:23.454 06:14:42 
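The [global]/[job0] parameters dumped above amount to a standalone job file; a sketch that reproduces the same one-second write-plus-CRC32C-verify pass outside the fio-wrapper (the /dev/nvme0n1 path assumes the nvme connect calls above succeeded):

# reconstruct the generated job file (values taken verbatim from the dump above)
cat > /tmp/nmic_write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic_write.fio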
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.454 rmmod nvme_tcp 00:12:23.454 rmmod nvme_fabrics 00:12:23.454 rmmod nvme_keyring 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 82419 ']' 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 82419 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 82419 ']' 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 82419 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82419 00:12:23.454 killing process with pid 82419 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82419' 00:12:23.454 06:14:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 82419 00:12:23.454 06:14:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 82419 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:23.711 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:23.966 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:23.966 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:23.966 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:23.966 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:23.966 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:23.967 00:12:23.967 real 0m5.906s 00:12:23.967 user 0m19.327s 00:12:23.967 sys 0m1.179s 00:12:23.967 ************************************ 00:12:23.967 END TEST nvmf_nmic 00:12:23.967 ************************************ 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.967 06:14:43 
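For reference, the whole nvmf_nmic flow above condenses to the RPC-level sketch below, assuming a running nvmf_tgt answering on the default /var/tmp/spdk.sock and scripts/rpc.py on PATH; NQNs and serials match the log, and the --hostnqn/--hostid flags on the connect calls are omitted here:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# test case1: a bdev claimed exclusive_write by cnode1 cannot join a second
# subsystem, so this add_ns fails with Code=-32602 as shown in the log
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected success' || echo 'Adding namespace failed - expected result.'
# test case2: a second listener gives the host a second path to cnode1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
# teardown mirrors the tail of the log: disconnect both paths, drop the modules
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics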
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:23.967 ************************************ 00:12:23.967 START TEST nvmf_fio_target 00:12:23.967 ************************************ 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:23.967 * Looking for test storage... 00:12:23.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.967 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:24.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.224 --rc genhtml_branch_coverage=1 00:12:24.224 --rc genhtml_function_coverage=1 00:12:24.224 --rc genhtml_legend=1 00:12:24.224 --rc geninfo_all_blocks=1 00:12:24.224 --rc geninfo_unexecuted_blocks=1 00:12:24.224 00:12:24.224 ' 00:12:24.224 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:24.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.224 --rc genhtml_branch_coverage=1 00:12:24.224 --rc genhtml_function_coverage=1 00:12:24.224 --rc genhtml_legend=1 00:12:24.224 --rc geninfo_all_blocks=1 00:12:24.224 --rc geninfo_unexecuted_blocks=1 00:12:24.224 00:12:24.224 ' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:24.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.225 --rc genhtml_branch_coverage=1 00:12:24.225 --rc genhtml_function_coverage=1 00:12:24.225 --rc genhtml_legend=1 00:12:24.225 --rc geninfo_all_blocks=1 00:12:24.225 --rc geninfo_unexecuted_blocks=1 00:12:24.225 00:12:24.225 ' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:24.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.225 --rc genhtml_branch_coverage=1 00:12:24.225 --rc genhtml_function_coverage=1 00:12:24.225 --rc genhtml_legend=1 00:12:24.225 --rc geninfo_all_blocks=1 00:12:24.225 --rc geninfo_unexecuted_blocks=1 00:12:24.225 00:12:24.225 ' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:24.225 
06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:24.225 06:14:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:24.225 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:24.226 Cannot find device "nvmf_init_br" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:24.226 Cannot find device "nvmf_init_br2" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:24.226 Cannot find device "nvmf_tgt_br" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.226 Cannot find device "nvmf_tgt_br2" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:24.226 Cannot find device "nvmf_init_br" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:24.226 Cannot find device "nvmf_init_br2" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:24.226 Cannot find device "nvmf_tgt_br" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:24.226 Cannot find device "nvmf_tgt_br2" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:24.226 Cannot find device "nvmf_br" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:24.226 Cannot find device "nvmf_init_if" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:24.226 Cannot find device "nvmf_init_if2" 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:24.226 
06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:24.226 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:24.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:12:24.484 00:12:24.484 --- 10.0.0.3 ping statistics --- 00:12:24.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.484 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:24.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:24.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:12:24.484 00:12:24.484 --- 10.0.0.4 ping statistics --- 00:12:24.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.484 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:24.484 00:12:24.484 --- 10.0.0.1 ping statistics --- 00:12:24.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.484 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:24.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
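The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: cleanup runs before setup and tolerates missing links. The topology these pings validate reduces to the sketch below (interface names and 10.0.0.x addresses as used by nvmf_veth_init above; the second init/tgt interface pair and its firewall rule are analogous and elided):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# admit NVMe/TCP traffic on the initiator side and allow bridge forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                  # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # namespace -> host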
00:12:24.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:24.484 00:12:24.484 --- 10.0.0.2 ping statistics --- 00:12:24.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.484 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=82765 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 82765 00:12:24.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 82765 ']' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.484 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.484 [2024-11-27 06:14:43.913778] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:12:24.484 [2024-11-27 06:14:43.913845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.742 [2024-11-27 06:14:44.031624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:24.742 [2024-11-27 06:14:44.055816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.742 [2024-11-27 06:14:44.078164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.742 [2024-11-27 06:14:44.078326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.742 [2024-11-27 06:14:44.078402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.742 [2024-11-27 06:14:44.078460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.742 [2024-11-27 06:14:44.078476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.742 [2024-11-27 06:14:44.079184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.742 [2024-11-27 06:14:44.079255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.742 [2024-11-27 06:14:44.079355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.742 [2024-11-27 06:14:44.079289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.304 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.305 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:25.305 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.305 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.305 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.561 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.561 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:25.561 [2024-11-27 06:14:45.056283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.561 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.818 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:25.818 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.075 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:26.075 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.332 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:26.332 06:14:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.590 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:26.590 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:26.925 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:27.183 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:27.183 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:27.442 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:27.442 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:27.699 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:27.699 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:27.957 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.957 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:27.957 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:28.216 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:28.216 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.473 06:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:28.731 [2024-11-27 06:14:48.041603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:28.731 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:28.990 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:28.990 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:29.248 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:29.248 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
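The fio.sh provisioning above spreads seven 64 MiB malloc bdevs across two plain namespaces, a two-member raid0, and a three-member concat set, then exposes all four namespaces through one subsystem; condensed into direct rpc.py calls (the add_ns/listener ordering differs slightly from the log, and the connect again omits the --hostnqn/--hostid flags):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done   # prints Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# on the host, a single connect then yields four block devices (nvme0n1..nvme0n4)
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420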
common/autotest_common.sh@1202 -- # local i=0 00:12:29.248 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.248 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:29.248 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:29.248 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.146 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.146 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.146 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.146 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:31.146 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.146 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:31.146 06:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:31.403 [global] 00:12:31.403 thread=1 00:12:31.403 invalidate=1 00:12:31.403 rw=write 00:12:31.403 time_based=1 00:12:31.403 runtime=1 00:12:31.403 ioengine=libaio 00:12:31.403 direct=1 00:12:31.403 bs=4096 00:12:31.403 iodepth=1 00:12:31.403 norandommap=0 00:12:31.403 numjobs=1 00:12:31.403 00:12:31.403 verify_dump=1 00:12:31.403 verify_backlog=512 00:12:31.403 verify_state_save=0 00:12:31.403 do_verify=1 00:12:31.403 verify=crc32c-intel 00:12:31.403 [job0] 00:12:31.403 filename=/dev/nvme0n1 00:12:31.403 [job1] 00:12:31.403 filename=/dev/nvme0n2 00:12:31.403 [job2] 00:12:31.403 filename=/dev/nvme0n3 00:12:31.403 [job3] 00:12:31.403 filename=/dev/nvme0n4 00:12:31.403 Could not set queue depth (nvme0n1) 00:12:31.403 Could not set queue depth (nvme0n2) 00:12:31.403 Could not set queue depth (nvme0n3) 00:12:31.403 Could not set queue depth (nvme0n4) 00:12:31.403 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.403 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.403 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.404 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.404 fio-3.35 00:12:31.404 Starting 4 threads 00:12:32.776 00:12:32.776 job0: (groupid=0, jobs=1): err= 0: pid=83049: Wed Nov 27 06:14:52 2024 00:12:32.776 read: IOPS=1636, BW=6545KiB/s (6703kB/s)(6552KiB/1001msec) 00:12:32.776 slat (nsec): min=4880, max=89114, avg=10133.61, stdev=9541.92 00:12:32.776 clat (usec): min=108, max=40938, avg=294.77, stdev=1012.59 00:12:32.776 lat (usec): min=113, max=40944, avg=304.91, stdev=1013.10 00:12:32.776 clat percentiles (usec): 00:12:32.776 | 1.00th=[ 121], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 184], 00:12:32.776 | 30.00th=[ 192], 40.00th=[ 206], 50.00th=[ 243], 60.00th=[ 255], 00:12:32.776 | 70.00th=[ 273], 80.00th=[ 322], 90.00th=[ 429], 95.00th=[ 545], 00:12:32.776 | 99.00th=[ 766], 
99.50th=[ 816], 99.90th=[ 938], 99.95th=[41157], 00:12:32.776 | 99.99th=[41157] 00:12:32.776 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:32.776 slat (usec): min=5, max=114, avg=16.39, stdev=11.62 00:12:32.776 clat (usec): min=62, max=735, avg=225.55, stdev=111.41 00:12:32.776 lat (usec): min=82, max=771, avg=241.94, stdev=112.67 00:12:32.776 clat percentiles (usec): 00:12:32.776 | 1.00th=[ 81], 5.00th=[ 114], 10.00th=[ 131], 20.00th=[ 139], 00:12:32.776 | 30.00th=[ 147], 40.00th=[ 163], 50.00th=[ 198], 60.00th=[ 210], 00:12:32.776 | 70.00th=[ 227], 80.00th=[ 343], 90.00th=[ 412], 95.00th=[ 457], 00:12:32.776 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 652], 99.95th=[ 660], 00:12:32.776 | 99.99th=[ 734] 00:12:32.777 bw ( KiB/s): min= 8192, max= 8192, per=21.64%, avg=8192.00, stdev= 0.00, samples=1 00:12:32.777 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:32.777 lat (usec) : 100=1.93%, 250=64.98%, 500=29.19%, 750=3.31%, 1000=0.57% 00:12:32.777 lat (msec) : 50=0.03% 00:12:32.777 cpu : usr=1.10%, sys=4.40%, ctx=3697, majf=0, minf=13 00:12:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 issued rwts: total=1638,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.777 job1: (groupid=0, jobs=1): err= 0: pid=83050: Wed Nov 27 06:14:52 2024 00:12:32.777 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:32.777 slat (usec): min=6, max=105, avg=26.31, stdev=18.33 00:12:32.777 clat (usec): min=82, max=745, avg=273.26, stdev=121.03 00:12:32.777 lat (usec): min=102, max=790, avg=299.57, stdev=133.08 00:12:32.777 clat percentiles (usec): 00:12:32.777 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 108], 20.00th=[ 121], 00:12:32.777 | 30.00th=[ 210], 40.00th=[ 237], 50.00th=[ 285], 60.00th=[ 322], 00:12:32.777 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 429], 95.00th=[ 465], 00:12:32.777 | 99.00th=[ 537], 99.50th=[ 603], 99.90th=[ 619], 99.95th=[ 750], 00:12:32.777 | 99.99th=[ 750] 00:12:32.777 write: IOPS=1610, BW=6442KiB/s (6596kB/s)(6448KiB/1001msec); 0 zone resets 00:12:32.777 slat (usec): min=9, max=128, avg=44.20, stdev=22.77 00:12:32.777 clat (usec): min=67, max=3721, avg=283.01, stdev=155.11 00:12:32.777 lat (usec): min=81, max=3824, avg=327.21, stdev=167.90 00:12:32.777 clat percentiles (usec): 00:12:32.777 | 1.00th=[ 77], 5.00th=[ 83], 10.00th=[ 89], 20.00th=[ 110], 00:12:32.777 | 30.00th=[ 182], 40.00th=[ 297], 50.00th=[ 330], 60.00th=[ 351], 00:12:32.777 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 449], 00:12:32.777 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 709], 99.95th=[ 3720], 00:12:32.777 | 99.99th=[ 3720] 00:12:32.777 bw ( KiB/s): min= 7464, max= 7464, per=19.71%, avg=7464.00, stdev= 0.00, samples=1 00:12:32.777 iops : min= 1866, max= 1866, avg=1866.00, stdev= 0.00, samples=1 00:12:32.777 lat (usec) : 100=10.13%, 250=30.15%, 500=57.94%, 750=1.75% 00:12:32.777 lat (msec) : 4=0.03% 00:12:32.777 cpu : usr=2.40%, sys=8.60%, ctx=3148, majf=0, minf=16 00:12:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 issued rwts: 
total=1536,1612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.777 job2: (groupid=0, jobs=1): err= 0: pid=83051: Wed Nov 27 06:14:52 2024 00:12:32.777 read: IOPS=1637, BW=6549KiB/s (6707kB/s)(6556KiB/1001msec) 00:12:32.777 slat (nsec): min=4249, max=83046, avg=9980.91, stdev=9912.04 00:12:32.777 clat (usec): min=118, max=40868, avg=295.28, stdev=1010.38 00:12:32.777 lat (usec): min=124, max=40894, avg=305.26, stdev=1011.41 00:12:32.777 clat percentiles (usec): 00:12:32.777 | 1.00th=[ 126], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 184], 00:12:32.777 | 30.00th=[ 192], 40.00th=[ 210], 50.00th=[ 243], 60.00th=[ 258], 00:12:32.777 | 70.00th=[ 277], 80.00th=[ 318], 90.00th=[ 433], 95.00th=[ 553], 00:12:32.777 | 99.00th=[ 775], 99.50th=[ 807], 99.90th=[ 889], 99.95th=[40633], 00:12:32.777 | 99.99th=[40633] 00:12:32.777 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:32.777 slat (usec): min=5, max=108, avg=17.77, stdev=11.80 00:12:32.777 clat (usec): min=34, max=803, avg=223.55, stdev=108.96 00:12:32.777 lat (usec): min=92, max=815, avg=241.32, stdev=112.16 00:12:32.777 clat percentiles (usec): 00:12:32.777 | 1.00th=[ 89], 5.00th=[ 115], 10.00th=[ 130], 20.00th=[ 139], 00:12:32.777 | 30.00th=[ 147], 40.00th=[ 163], 50.00th=[ 196], 60.00th=[ 210], 00:12:32.777 | 70.00th=[ 225], 80.00th=[ 330], 90.00th=[ 404], 95.00th=[ 445], 00:12:32.777 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 652], 99.95th=[ 652], 00:12:32.777 | 99.99th=[ 807] 00:12:32.777 bw ( KiB/s): min= 8192, max= 8192, per=21.64%, avg=8192.00, stdev= 0.00, samples=1 00:12:32.777 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:32.777 lat (usec) : 50=0.03%, 100=1.60%, 250=64.90%, 500=29.26%, 750=3.66% 00:12:32.777 lat (usec) : 1000=0.52% 00:12:32.777 lat (msec) : 50=0.03% 00:12:32.777 cpu : usr=1.40%, sys=4.50%, ctx=3714, majf=0, minf=9 00:12:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 issued rwts: total=1639,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.777 job3: (groupid=0, jobs=1): err= 0: pid=83052: Wed Nov 27 06:14:52 2024 00:12:32.777 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:32.777 slat (nsec): min=6023, max=94582, avg=9174.15, stdev=8311.69 00:12:32.777 clat (usec): min=96, max=3960, avg=135.98, stdev=126.69 00:12:32.777 lat (usec): min=104, max=3966, avg=145.15, stdev=130.08 00:12:32.777 clat percentiles (usec): 00:12:32.777 | 1.00th=[ 101], 5.00th=[ 105], 10.00th=[ 108], 20.00th=[ 111], 00:12:32.777 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:12:32.777 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 137], 95.00th=[ 310], 00:12:32.777 | 99.00th=[ 424], 99.50th=[ 461], 99.90th=[ 2933], 99.95th=[ 3294], 00:12:32.777 | 99.99th=[ 3949] 00:12:32.777 write: IOPS=3763, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 00:12:32.777 slat (usec): min=9, max=118, avg=13.67, stdev=10.52 00:12:32.777 clat (usec): min=75, max=1639, avg=111.40, stdev=71.14 00:12:32.777 lat (usec): min=85, max=1649, avg=125.07, stdev=79.20 00:12:32.777 clat percentiles (usec): 00:12:32.777 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:12:32.777 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 
93], 60.00th=[ 95], 00:12:32.777 | 70.00th=[ 98], 80.00th=[ 102], 90.00th=[ 114], 95.00th=[ 318], 00:12:32.777 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 510], 99.95th=[ 537], 00:12:32.777 | 99.99th=[ 1647] 00:12:32.777 bw ( KiB/s): min=16384, max=16384, per=43.27%, avg=16384.00, stdev= 0.00, samples=1 00:12:32.777 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:32.777 lat (usec) : 100=39.48%, 250=54.26%, 500=6.07%, 750=0.10% 00:12:32.777 lat (msec) : 2=0.04%, 4=0.05% 00:12:32.777 cpu : usr=1.50%, sys=6.80%, ctx=7351, majf=0, minf=11 00:12:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.777 issued rwts: total=3584,3767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.777 00:12:32.777 Run status group 0 (all jobs): 00:12:32.777 READ: bw=32.8MiB/s (34.4MB/s), 6138KiB/s-14.0MiB/s (6285kB/s-14.7MB/s), io=32.8MiB (34.4MB), run=1001-1001msec 00:12:32.777 WRITE: bw=37.0MiB/s (38.8MB/s), 6442KiB/s-14.7MiB/s (6596kB/s-15.4MB/s), io=37.0MiB (38.8MB), run=1001-1001msec 00:12:32.777 00:12:32.777 Disk stats (read/write): 00:12:32.777 nvme0n1: ios=1586/1694, merge=0/0, ticks=469/371, in_queue=840, util=89.08% 00:12:32.777 nvme0n2: ios=1090/1536, merge=0/0, ticks=336/451, in_queue=787, util=88.86% 00:12:32.777 nvme0n3: ios=1536/1695, merge=0/0, ticks=423/367, in_queue=790, util=89.25% 00:12:32.777 nvme0n4: ios=3072/3158, merge=0/0, ticks=427/387, in_queue=814, util=89.29% 00:12:32.777 06:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:32.777 [global] 00:12:32.777 thread=1 00:12:32.777 invalidate=1 00:12:32.777 rw=randwrite 00:12:32.777 time_based=1 00:12:32.777 runtime=1 00:12:32.777 ioengine=libaio 00:12:32.777 direct=1 00:12:32.777 bs=4096 00:12:32.777 iodepth=1 00:12:32.777 norandommap=0 00:12:32.777 numjobs=1 00:12:32.777 00:12:32.777 verify_dump=1 00:12:32.777 verify_backlog=512 00:12:32.777 verify_state_save=0 00:12:32.777 do_verify=1 00:12:32.777 verify=crc32c-intel 00:12:32.777 [job0] 00:12:32.777 filename=/dev/nvme0n1 00:12:32.777 [job1] 00:12:32.777 filename=/dev/nvme0n2 00:12:32.777 [job2] 00:12:32.777 filename=/dev/nvme0n3 00:12:32.777 [job3] 00:12:32.777 filename=/dev/nvme0n4 00:12:32.777 Could not set queue depth (nvme0n1) 00:12:32.777 Could not set queue depth (nvme0n2) 00:12:32.777 Could not set queue depth (nvme0n3) 00:12:32.777 Could not set queue depth (nvme0n4) 00:12:32.777 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.777 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.777 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.777 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.778 fio-3.35 00:12:32.778 Starting 4 threads 00:12:34.152 00:12:34.152 job0: (groupid=0, jobs=1): err= 0: pid=83109: Wed Nov 27 06:14:53 2024 00:12:34.152 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:34.152 slat (nsec): min=5939, max=83891, avg=13905.35, stdev=12861.29 00:12:34.152 clat (usec): min=93, 
max=1304, avg=185.47, stdev=84.53 00:12:34.152 lat (usec): min=99, max=1311, avg=199.38, stdev=94.69 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 98], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 111], 00:12:34.152 | 30.00th=[ 120], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 178], 00:12:34.152 | 70.00th=[ 186], 80.00th=[ 265], 90.00th=[ 314], 95.00th=[ 351], 00:12:34.152 | 99.00th=[ 416], 99.50th=[ 445], 99.90th=[ 537], 99.95th=[ 578], 00:12:34.152 | 99.99th=[ 1303] 00:12:34.152 write: IOPS=2457, BW=9830KiB/s (10.1MB/s)(9840KiB/1001msec); 0 zone resets 00:12:34.152 slat (usec): min=9, max=114, avg=27.99, stdev=19.76 00:12:34.152 clat (usec): min=69, max=659, avg=208.38, stdev=116.15 00:12:34.152 lat (usec): min=78, max=743, avg=236.37, stdev=131.38 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 90], 00:12:34.152 | 30.00th=[ 131], 40.00th=[ 139], 50.00th=[ 149], 60.00th=[ 255], 00:12:34.152 | 70.00th=[ 306], 80.00th=[ 334], 90.00th=[ 367], 95.00th=[ 396], 00:12:34.152 | 99.00th=[ 453], 99.50th=[ 478], 99.90th=[ 553], 99.95th=[ 562], 00:12:34.152 | 99.99th=[ 660] 00:12:34.152 bw ( KiB/s): min=12312, max=12312, per=31.02%, avg=12312.00, stdev= 0.00, samples=1 00:12:34.152 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:12:34.152 lat (usec) : 100=14.82%, 250=53.42%, 500=31.57%, 750=0.18% 00:12:34.152 lat (msec) : 2=0.02% 00:12:34.152 cpu : usr=1.90%, sys=7.60%, ctx=4509, majf=0, minf=11 00:12:34.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 issued rwts: total=2048,2460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.152 job1: (groupid=0, jobs=1): err= 0: pid=83110: Wed Nov 27 06:14:53 2024 00:12:34.152 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:34.152 slat (nsec): min=5904, max=82475, avg=13394.12, stdev=10248.85 00:12:34.152 clat (usec): min=90, max=1609, avg=185.87, stdev=88.46 00:12:34.152 lat (usec): min=99, max=1619, avg=199.26, stdev=95.52 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 110], 00:12:34.152 | 30.00th=[ 117], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 182], 00:12:34.152 | 70.00th=[ 192], 80.00th=[ 237], 90.00th=[ 318], 95.00th=[ 367], 00:12:34.152 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 562], 99.95th=[ 578], 00:12:34.152 | 99.99th=[ 1614] 00:12:34.152 write: IOPS=2609, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:12:34.152 slat (usec): min=9, max=134, avg=21.96, stdev=17.75 00:12:34.152 clat (usec): min=70, max=652, avg=161.91, stdev=102.14 00:12:34.152 lat (usec): min=80, max=694, avg=183.87, stdev=115.45 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 82], 20.00th=[ 86], 00:12:34.152 | 30.00th=[ 90], 40.00th=[ 110], 50.00th=[ 135], 60.00th=[ 145], 00:12:34.152 | 70.00th=[ 155], 80.00th=[ 184], 90.00th=[ 347], 95.00th=[ 388], 00:12:34.152 | 99.00th=[ 453], 99.50th=[ 474], 99.90th=[ 578], 99.95th=[ 619], 00:12:34.152 | 99.99th=[ 652] 00:12:34.152 bw ( KiB/s): min= 8984, max= 8984, per=22.64%, avg=8984.00, stdev= 0.00, samples=1 00:12:34.152 iops : min= 2246, max= 2246, avg=2246.00, stdev= 0.00, samples=1 00:12:34.152 lat (usec) : 100=20.69%, 250=61.33%, 500=17.79%, 
750=0.17% 00:12:34.152 lat (msec) : 2=0.02% 00:12:34.152 cpu : usr=1.90%, sys=7.00%, ctx=5173, majf=0, minf=13 00:12:34.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 issued rwts: total=2560,2612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.152 job2: (groupid=0, jobs=1): err= 0: pid=83111: Wed Nov 27 06:14:53 2024 00:12:34.152 read: IOPS=2699, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:12:34.152 slat (usec): min=6, max=109, avg=10.55, stdev= 6.47 00:12:34.152 clat (usec): min=38, max=551, avg=169.02, stdev=58.90 00:12:34.152 lat (usec): min=107, max=587, avg=179.57, stdev=63.11 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 119], 00:12:34.152 | 30.00th=[ 123], 40.00th=[ 133], 50.00th=[ 176], 60.00th=[ 182], 00:12:34.152 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 219], 95.00th=[ 293], 00:12:34.152 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 498], 99.95th=[ 537], 00:12:34.152 | 99.99th=[ 553] 00:12:34.152 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:34.152 slat (usec): min=9, max=135, avg=18.07, stdev=12.29 00:12:34.152 clat (usec): min=72, max=527, avg=146.83, stdev=79.87 00:12:34.152 lat (usec): min=87, max=610, avg=164.90, stdev=89.08 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 91], 00:12:34.152 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 118], 60.00th=[ 151], 00:12:34.152 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 289], 95.00th=[ 355], 00:12:34.152 | 99.00th=[ 420], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 498], 00:12:34.152 | 99.99th=[ 529] 00:12:34.152 bw ( KiB/s): min=12288, max=12288, per=30.96%, avg=12288.00, stdev= 0.00, samples=1 00:12:34.152 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:34.152 lat (usec) : 50=0.02%, 100=21.61%, 250=69.24%, 500=9.08%, 750=0.05% 00:12:34.152 cpu : usr=1.50%, sys=6.60%, ctx=5777, majf=0, minf=13 00:12:34.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 issued rwts: total=2702,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.152 job3: (groupid=0, jobs=1): err= 0: pid=83112: Wed Nov 27 06:14:53 2024 00:12:34.152 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:34.152 slat (usec): min=6, max=103, avg=26.37, stdev=17.59 00:12:34.152 clat (usec): min=100, max=771, avg=279.07, stdev=85.42 00:12:34.152 lat (usec): min=108, max=803, avg=305.44, stdev=97.57 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 111], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 194], 00:12:34.152 | 30.00th=[ 227], 40.00th=[ 260], 50.00th=[ 277], 60.00th=[ 293], 00:12:34.152 | 70.00th=[ 318], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 424], 00:12:34.152 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 701], 99.95th=[ 775], 00:12:34.152 | 99.99th=[ 775] 00:12:34.152 write: IOPS=1785, BW=7141KiB/s (7312kB/s)(7148KiB/1001msec); 0 zone resets 00:12:34.152 slat (usec): min=9, max=134, avg=34.48, stdev=21.07 00:12:34.152 clat 
(usec): min=83, max=1831, avg=256.10, stdev=119.03 00:12:34.152 lat (usec): min=93, max=1877, avg=290.58, stdev=132.63 00:12:34.152 clat percentiles (usec): 00:12:34.152 | 1.00th=[ 87], 5.00th=[ 93], 10.00th=[ 102], 20.00th=[ 149], 00:12:34.152 | 30.00th=[ 165], 40.00th=[ 182], 50.00th=[ 281], 60.00th=[ 314], 00:12:34.152 | 70.00th=[ 338], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 424], 00:12:34.152 | 99.00th=[ 490], 99.50th=[ 519], 99.90th=[ 709], 99.95th=[ 1827], 00:12:34.152 | 99.99th=[ 1827] 00:12:34.152 bw ( KiB/s): min= 8192, max= 8192, per=20.64%, avg=8192.00, stdev= 0.00, samples=1 00:12:34.152 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:34.152 lat (usec) : 100=4.85%, 250=36.53%, 500=57.78%, 750=0.78%, 1000=0.03% 00:12:34.152 lat (msec) : 2=0.03% 00:12:34.152 cpu : usr=2.00%, sys=8.10%, ctx=3323, majf=0, minf=11 00:12:34.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.152 issued rwts: total=1536,1787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.152 00:12:34.152 Run status group 0 (all jobs): 00:12:34.152 READ: bw=34.5MiB/s (36.2MB/s), 6138KiB/s-10.5MiB/s (6285kB/s-11.1MB/s), io=34.6MiB (36.2MB), run=1001-1001msec 00:12:34.152 WRITE: bw=38.8MiB/s (40.6MB/s), 7141KiB/s-12.0MiB/s (7312kB/s-12.6MB/s), io=38.8MiB (40.7MB), run=1001-1001msec 00:12:34.152 00:12:34.152 Disk stats (read/write): 00:12:34.152 nvme0n1: ios=2097/2099, merge=0/0, ticks=428/428, in_queue=856, util=89.67% 00:12:34.152 nvme0n2: ios=2084/2522, merge=0/0, ticks=419/414, in_queue=833, util=89.80% 00:12:34.152 nvme0n3: ios=2529/2560, merge=0/0, ticks=410/400, in_queue=810, util=89.58% 00:12:34.152 nvme0n4: ios=1434/1536, merge=0/0, ticks=399/394, in_queue=793, util=89.75% 00:12:34.152 06:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:34.152 [global] 00:12:34.152 thread=1 00:12:34.152 invalidate=1 00:12:34.152 rw=write 00:12:34.152 time_based=1 00:12:34.152 runtime=1 00:12:34.152 ioengine=libaio 00:12:34.152 direct=1 00:12:34.152 bs=4096 00:12:34.152 iodepth=128 00:12:34.152 norandommap=0 00:12:34.152 numjobs=1 00:12:34.152 00:12:34.153 verify_dump=1 00:12:34.153 verify_backlog=512 00:12:34.153 verify_state_save=0 00:12:34.153 do_verify=1 00:12:34.153 verify=crc32c-intel 00:12:34.153 [job0] 00:12:34.153 filename=/dev/nvme0n1 00:12:34.153 [job1] 00:12:34.153 filename=/dev/nvme0n2 00:12:34.153 [job2] 00:12:34.153 filename=/dev/nvme0n3 00:12:34.153 [job3] 00:12:34.153 filename=/dev/nvme0n4 00:12:34.153 Could not set queue depth (nvme0n1) 00:12:34.153 Could not set queue depth (nvme0n2) 00:12:34.153 Could not set queue depth (nvme0n3) 00:12:34.153 Could not set queue depth (nvme0n4) 00:12:34.153 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.153 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.153 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.153 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.153 fio-3.35 00:12:34.153 Starting 4 threads 
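(For reference: the parameters echoed above correspond to a standalone fio job file along the following lines. This is a sketch reconstructed only from the options logged by fio-wrapper; the exact file layout and path that scripts/fio-wrapper writes out are assumptions.)

    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4

(Saved as, say, write-128.fio, it could be run directly with `fio write-128.fio` to reproduce this stage outside the wrapper; the file name is hypothetical.)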
00:12:35.526 00:12:35.526 job0: (groupid=0, jobs=1): err= 0: pid=83166: Wed Nov 27 06:14:54 2024 00:12:35.526 read: IOPS=6561, BW=25.6MiB/s (26.9MB/s)(25.7MiB/1003msec) 00:12:35.526 slat (usec): min=2, max=2445, avg=73.12, stdev=352.03 00:12:35.526 clat (usec): min=540, max=12150, avg=9669.79, stdev=773.39 00:12:35.526 lat (usec): min=2742, max=12436, avg=9742.90, stdev=706.46 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9503], 00:12:35.526 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:12:35.526 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10159], 95.00th=[10421], 00:12:35.526 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11207], 99.95th=[11469], 00:12:35.526 | 99.99th=[12125] 00:12:35.526 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:12:35.526 slat (usec): min=6, max=3912, avg=73.78, stdev=339.80 00:12:35.526 clat (usec): min=7102, max=11749, avg=9495.79, stdev=783.80 00:12:35.526 lat (usec): min=7169, max=11760, avg=9569.57, stdev=727.64 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[ 7439], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 9241], 00:12:35.526 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:12:35.526 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10290], 95.00th=[10683], 00:12:35.526 | 99.00th=[11600], 99.50th=[11600], 99.90th=[11731], 99.95th=[11731], 00:12:35.526 | 99.99th=[11731] 00:12:35.526 bw ( KiB/s): min=25736, max=27567, per=34.62%, avg=26651.50, stdev=1294.71, samples=2 00:12:35.526 iops : min= 6434, max= 6891, avg=6662.50, stdev=323.15, samples=2 00:12:35.526 lat (usec) : 750=0.01% 00:12:35.526 lat (msec) : 4=0.24%, 10=78.54%, 20=21.21% 00:12:35.526 cpu : usr=2.69%, sys=10.98%, ctx=629, majf=0, minf=5 00:12:35.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:35.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.526 issued rwts: total=6581,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.526 job1: (groupid=0, jobs=1): err= 0: pid=83167: Wed Nov 27 06:14:54 2024 00:12:35.526 read: IOPS=2837, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1004msec) 00:12:35.526 slat (usec): min=3, max=12318, avg=182.83, stdev=1027.80 00:12:35.526 clat (usec): min=406, max=45683, avg=23686.72, stdev=7116.55 00:12:35.526 lat (usec): min=8193, max=45699, avg=23869.56, stdev=7081.33 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[ 8586], 5.00th=[16057], 10.00th=[18744], 20.00th=[19268], 00:12:35.526 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[21103], 00:12:35.526 | 70.00th=[24773], 80.00th=[30540], 90.00th=[33817], 95.00th=[39584], 00:12:35.526 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:12:35.526 | 99.99th=[45876] 00:12:35.526 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:12:35.526 slat (usec): min=6, max=12798, avg=151.49, stdev=831.41 00:12:35.526 clat (usec): min=10343, max=32433, avg=19054.38, stdev=6116.73 00:12:35.526 lat (usec): min=13175, max=32455, avg=19205.87, stdev=6111.17 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[11207], 5.00th=[13698], 10.00th=[13960], 20.00th=[14091], 00:12:35.526 | 30.00th=[14222], 40.00th=[14484], 50.00th=[16188], 60.00th=[18220], 00:12:35.526 | 70.00th=[22938], 80.00th=[25297], 
90.00th=[30016], 95.00th=[30278], 00:12:35.526 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:12:35.526 | 99.99th=[32375] 00:12:35.526 bw ( KiB/s): min=12288, max=12312, per=15.98%, avg=12300.00, stdev=16.97, samples=2 00:12:35.526 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:12:35.526 lat (usec) : 500=0.02% 00:12:35.526 lat (msec) : 10=0.54%, 20=55.01%, 50=44.44% 00:12:35.526 cpu : usr=1.60%, sys=5.98%, ctx=187, majf=0, minf=15 00:12:35.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:35.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.526 issued rwts: total=2849,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.526 job2: (groupid=0, jobs=1): err= 0: pid=83168: Wed Nov 27 06:14:54 2024 00:12:35.526 read: IOPS=3310, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:12:35.526 slat (usec): min=3, max=7413, avg=128.32, stdev=683.14 00:12:35.526 clat (usec): min=1773, max=28013, avg=15418.70, stdev=2810.50 00:12:35.526 lat (usec): min=5070, max=28024, avg=15547.02, stdev=2866.25 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[ 8356], 5.00th=[11338], 10.00th=[12518], 20.00th=[13304], 00:12:35.526 | 30.00th=[14222], 40.00th=[15139], 50.00th=[15270], 60.00th=[15795], 00:12:35.526 | 70.00th=[16319], 80.00th=[16909], 90.00th=[19006], 95.00th=[20317], 00:12:35.526 | 99.00th=[23462], 99.50th=[24773], 99.90th=[27919], 99.95th=[27919], 00:12:35.526 | 99.99th=[27919] 00:12:35.526 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:12:35.526 slat (usec): min=8, max=6664, avg=154.80, stdev=565.04 00:12:35.526 clat (usec): min=10909, max=31001, avg=21133.76, stdev=4213.29 00:12:35.526 lat (usec): min=10924, max=31016, avg=21288.56, stdev=4234.51 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[12256], 5.00th=[14484], 10.00th=[15008], 20.00th=[18220], 00:12:35.526 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20317], 60.00th=[21627], 00:12:35.526 | 70.00th=[22676], 80.00th=[25035], 90.00th=[27919], 95.00th=[28443], 00:12:35.526 | 99.00th=[28967], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:12:35.526 | 99.99th=[31065] 00:12:35.526 bw ( KiB/s): min=13320, max=15352, per=18.62%, avg=14336.00, stdev=1436.84, samples=2 00:12:35.526 iops : min= 3330, max= 3838, avg=3584.00, stdev=359.21, samples=2 00:12:35.526 lat (msec) : 2=0.01%, 10=1.26%, 20=64.20%, 50=34.53% 00:12:35.526 cpu : usr=2.29%, sys=7.58%, ctx=495, majf=0, minf=7 00:12:35.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:35.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.526 issued rwts: total=3324,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.526 job3: (groupid=0, jobs=1): err= 0: pid=83169: Wed Nov 27 06:14:54 2024 00:12:35.526 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:12:35.526 slat (usec): min=4, max=3107, avg=85.82, stdev=414.16 00:12:35.526 clat (usec): min=8242, max=13177, avg=11094.96, stdev=653.60 00:12:35.526 lat (usec): min=8564, max=14036, avg=11180.79, stdev=535.62 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[10421], 
20.00th=[10945], 00:12:35.526 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:12:35.526 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:12:35.526 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12911], 99.95th=[13173], 00:12:35.526 | 99.99th=[13173] 00:12:35.526 write: IOPS=5990, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1003msec); 0 zone resets 00:12:35.526 slat (usec): min=5, max=2769, avg=81.19, stdev=334.00 00:12:35.526 clat (usec): min=1837, max=12922, avg=10683.79, stdev=1120.15 00:12:35.526 lat (usec): min=2379, max=12943, avg=10764.98, stdev=1092.73 00:12:35.526 clat percentiles (usec): 00:12:35.526 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:12:35.526 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:12:35.526 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:12:35.526 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12911], 99.95th=[12911], 00:12:35.526 | 99.99th=[12911] 00:12:35.526 bw ( KiB/s): min=22472, max=24625, per=30.59%, avg=23548.50, stdev=1522.40, samples=2 00:12:35.526 iops : min= 5618, max= 6156, avg=5887.00, stdev=380.42, samples=2 00:12:35.526 lat (msec) : 2=0.01%, 4=0.21%, 10=14.06%, 20=85.72% 00:12:35.527 cpu : usr=2.99%, sys=10.78%, ctx=762, majf=0, minf=8 00:12:35.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:35.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.527 issued rwts: total=5632,6008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.527 00:12:35.527 Run status group 0 (all jobs): 00:12:35.527 READ: bw=71.5MiB/s (75.0MB/s), 11.1MiB/s-25.6MiB/s (11.6MB/s-26.9MB/s), io=71.8MiB (75.3MB), run=1003-1004msec 00:12:35.527 WRITE: bw=75.2MiB/s (78.8MB/s), 12.0MiB/s-25.9MiB/s (12.5MB/s-27.2MB/s), io=75.5MiB (79.1MB), run=1003-1004msec 00:12:35.527 00:12:35.527 Disk stats (read/write): 00:12:35.527 nvme0n1: ios=5682/5965, merge=0/0, ticks=13183/12239, in_queue=25422, util=90.67% 00:12:35.527 nvme0n2: ios=2545/2560, merge=0/0, ticks=14647/11778, in_queue=26425, util=91.04% 00:12:35.527 nvme0n3: ios=3069/3087, merge=0/0, ticks=23428/30140, in_queue=53568, util=89.62% 00:12:35.527 nvme0n4: ios=5083/5120, merge=0/0, ticks=13599/12576, in_queue=26175, util=90.31% 00:12:35.527 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:35.527 [global] 00:12:35.527 thread=1 00:12:35.527 invalidate=1 00:12:35.527 rw=randwrite 00:12:35.527 time_based=1 00:12:35.527 runtime=1 00:12:35.527 ioengine=libaio 00:12:35.527 direct=1 00:12:35.527 bs=4096 00:12:35.527 iodepth=128 00:12:35.527 norandommap=0 00:12:35.527 numjobs=1 00:12:35.527 00:12:35.527 verify_dump=1 00:12:35.527 verify_backlog=512 00:12:35.527 verify_state_save=0 00:12:35.527 do_verify=1 00:12:35.527 verify=crc32c-intel 00:12:35.527 [job0] 00:12:35.527 filename=/dev/nvme0n1 00:12:35.527 [job1] 00:12:35.527 filename=/dev/nvme0n2 00:12:35.527 [job2] 00:12:35.527 filename=/dev/nvme0n3 00:12:35.527 [job3] 00:12:35.527 filename=/dev/nvme0n4 00:12:35.527 Could not set queue depth (nvme0n1) 00:12:35.527 Could not set queue depth (nvme0n2) 00:12:35.527 Could not set queue depth (nvme0n3) 00:12:35.527 Could not set queue depth (nvme0n4) 00:12:35.527 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.527 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.527 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.527 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.527 fio-3.35 00:12:35.527 Starting 4 threads 00:12:37.013 00:12:37.013 job0: (groupid=0, jobs=1): err= 0: pid=83230: Wed Nov 27 06:14:56 2024 00:12:37.013 read: IOPS=4001, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1006msec) 00:12:37.013 slat (usec): min=2, max=11071, avg=109.14, stdev=650.85 00:12:37.013 clat (usec): min=3818, max=41259, avg=12385.91, stdev=5321.81 00:12:37.013 lat (usec): min=4425, max=41272, avg=12495.05, stdev=5384.55 00:12:37.013 clat percentiles (usec): 00:12:37.013 | 1.00th=[ 6652], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 8717], 00:12:37.013 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[11600], 60.00th=[12125], 00:12:37.013 | 70.00th=[12911], 80.00th=[14091], 90.00th=[18482], 95.00th=[23200], 00:12:37.013 | 99.00th=[35914], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:12:37.013 | 99.99th=[41157] 00:12:37.013 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:12:37.013 slat (usec): min=3, max=8871, avg=131.63, stdev=596.28 00:12:37.013 clat (usec): min=2573, max=44457, avg=18971.19, stdev=9991.98 00:12:37.013 lat (usec): min=2591, max=44467, avg=19102.82, stdev=10062.68 00:12:37.013 clat percentiles (usec): 00:12:37.013 | 1.00th=[ 3458], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[10290], 00:12:37.013 | 30.00th=[11731], 40.00th=[15664], 50.00th=[16581], 60.00th=[17433], 00:12:37.013 | 70.00th=[23462], 80.00th=[29754], 90.00th=[34341], 95.00th=[39060], 00:12:37.013 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:12:37.013 | 99.99th=[44303] 00:12:37.013 bw ( KiB/s): min=15120, max=17683, per=27.04%, avg=16401.50, stdev=1812.31, samples=2 00:12:37.013 iops : min= 3780, max= 4420, avg=4100.00, stdev=452.55, samples=2 00:12:37.013 lat (msec) : 4=0.68%, 10=28.61%, 20=49.54%, 50=21.16% 00:12:37.013 cpu : usr=2.59%, sys=7.36%, ctx=505, majf=0, minf=12 00:12:37.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:37.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.013 issued rwts: total=4026,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.013 job1: (groupid=0, jobs=1): err= 0: pid=83231: Wed Nov 27 06:14:56 2024 00:12:37.013 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:12:37.013 slat (usec): min=4, max=15896, avg=120.07, stdev=747.65 00:12:37.013 clat (usec): min=7559, max=49627, avg=14184.99, stdev=6630.74 00:12:37.013 lat (usec): min=7573, max=49651, avg=14305.06, stdev=6695.06 00:12:37.013 clat percentiles (usec): 00:12:37.013 | 1.00th=[ 8225], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:12:37.013 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:12:37.013 | 70.00th=[12911], 80.00th=[15270], 90.00th=[23725], 95.00th=[27395], 00:12:37.013 | 99.00th=[45876], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:12:37.013 | 99.99th=[49546] 00:12:37.013 write: IOPS=3392, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1009msec); 
0 zone resets 00:12:37.013 slat (usec): min=5, max=20146, avg=179.15, stdev=883.13 00:12:37.013 clat (usec): min=7523, max=69552, avg=24423.19, stdev=12777.55 00:12:37.013 lat (usec): min=8410, max=69569, avg=24602.33, stdev=12838.87 00:12:37.013 clat percentiles (usec): 00:12:37.013 | 1.00th=[ 9634], 5.00th=[12518], 10.00th=[14484], 20.00th=[16581], 00:12:37.014 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[21627], 00:12:37.014 | 70.00th=[24773], 80.00th=[33817], 90.00th=[45351], 95.00th=[49021], 00:12:37.014 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:12:37.014 | 99.99th=[69731] 00:12:37.014 bw ( KiB/s): min=12432, max=13952, per=21.75%, avg=13192.00, stdev=1074.80, samples=2 00:12:37.014 iops : min= 3108, max= 3488, avg=3298.00, stdev=268.70, samples=2 00:12:37.014 lat (msec) : 10=3.31%, 20=66.28%, 50=28.67%, 100=1.74% 00:12:37.014 cpu : usr=1.88%, sys=7.04%, ctx=460, majf=0, minf=11 00:12:37.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:37.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.014 issued rwts: total=3072,3423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.014 job2: (groupid=0, jobs=1): err= 0: pid=83232: Wed Nov 27 06:14:56 2024 00:12:37.014 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:12:37.014 slat (usec): min=3, max=10744, avg=101.20, stdev=604.13 00:12:37.014 clat (usec): min=6989, max=40639, avg=13362.10, stdev=4462.31 00:12:37.014 lat (usec): min=7000, max=44111, avg=13463.30, stdev=4500.68 00:12:37.014 clat percentiles (usec): 00:12:37.014 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[10814], 20.00th=[11207], 00:12:37.014 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:12:37.014 | 70.00th=[12387], 80.00th=[15008], 90.00th=[18744], 95.00th=[21103], 00:12:37.014 | 99.00th=[32113], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:12:37.014 | 99.99th=[40633] 00:12:37.014 write: IOPS=3709, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1009msec); 0 zone resets 00:12:37.014 slat (usec): min=5, max=18588, avg=165.85, stdev=928.68 00:12:37.014 clat (usec): min=4848, max=50635, avg=21318.55, stdev=9885.55 00:12:37.014 lat (usec): min=5723, max=50681, avg=21484.40, stdev=9945.99 00:12:37.014 clat percentiles (usec): 00:12:37.014 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[10552], 20.00th=[13042], 00:12:37.014 | 30.00th=[16188], 40.00th=[16581], 50.00th=[17171], 60.00th=[19530], 00:12:37.014 | 70.00th=[23987], 80.00th=[30278], 90.00th=[38536], 95.00th=[42206], 00:12:37.014 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45876], 99.95th=[46924], 00:12:37.014 | 99.99th=[50594] 00:12:37.014 bw ( KiB/s): min=14456, max=14464, per=23.84%, avg=14460.00, stdev= 5.66, samples=2 00:12:37.014 iops : min= 3614, max= 3616, avg=3615.00, stdev= 1.41, samples=2 00:12:37.014 lat (msec) : 10=4.46%, 20=71.87%, 50=23.65%, 100=0.01% 00:12:37.014 cpu : usr=1.79%, sys=7.64%, ctx=417, majf=0, minf=11 00:12:37.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:37.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.014 issued rwts: total=3584,3743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.014 latency : target=0, window=0, percentile=100.00%, depth=128 
00:12:37.014 job3: (groupid=0, jobs=1): err= 0: pid=83233: Wed Nov 27 06:14:56 2024 00:12:37.014 read: IOPS=3591, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1013msec) 00:12:37.014 slat (usec): min=2, max=12233, avg=119.14, stdev=714.89 00:12:37.014 clat (usec): min=4641, max=37853, avg=13621.20, stdev=5404.13 00:12:37.014 lat (usec): min=4648, max=37864, avg=13740.34, stdev=5455.42 00:12:37.014 clat percentiles (usec): 00:12:37.014 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9765], 00:12:37.014 | 30.00th=[10159], 40.00th=[11731], 50.00th=[12649], 60.00th=[13304], 00:12:37.014 | 70.00th=[14353], 80.00th=[15533], 90.00th=[20579], 95.00th=[25035], 00:12:37.014 | 99.00th=[34341], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:12:37.014 | 99.99th=[38011] 00:12:37.014 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets 00:12:37.014 slat (usec): min=3, max=10920, avg=133.13, stdev=538.60 00:12:37.014 clat (usec): min=3007, max=42426, avg=19263.49, stdev=8080.93 00:12:37.014 lat (usec): min=3020, max=42438, avg=19396.62, stdev=8135.55 00:12:37.014 clat percentiles (usec): 00:12:37.014 | 1.00th=[ 4424], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[12387], 00:12:37.014 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17433], 60.00th=[18220], 00:12:37.014 | 70.00th=[22676], 80.00th=[26084], 90.00th=[31589], 95.00th=[34866], 00:12:37.014 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:12:37.014 | 99.99th=[42206] 00:12:37.014 bw ( KiB/s): min=15951, max=16256, per=26.55%, avg=16103.50, stdev=215.67, samples=2 00:12:37.014 iops : min= 3987, max= 4064, avg=4025.50, stdev=54.45, samples=2 00:12:37.014 lat (msec) : 4=0.31%, 10=19.41%, 20=55.88%, 50=24.40% 00:12:37.014 cpu : usr=2.08%, sys=7.31%, ctx=560, majf=0, minf=13 00:12:37.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:37.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.014 issued rwts: total=3638,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.014 00:12:37.014 Run status group 0 (all jobs): 00:12:37.014 READ: bw=55.2MiB/s (57.9MB/s), 11.9MiB/s-15.6MiB/s (12.5MB/s-16.4MB/s), io=55.9MiB (58.7MB), run=1006-1013msec 00:12:37.014 WRITE: bw=59.2MiB/s (62.1MB/s), 13.3MiB/s-15.9MiB/s (13.9MB/s-16.7MB/s), io=60.0MiB (62.9MB), run=1006-1013msec 00:12:37.014 00:12:37.014 Disk stats (read/write): 00:12:37.014 nvme0n1: ios=3634/3751, merge=0/0, ticks=41839/62878, in_queue=104717, util=90.18% 00:12:37.014 nvme0n2: ios=2603/2983, merge=0/0, ticks=16676/36187, in_queue=52863, util=90.47% 00:12:37.014 nvme0n3: ios=2965/3072, merge=0/0, ticks=19055/33696, in_queue=52751, util=89.55% 00:12:37.014 nvme0n4: ios=3072/3519, merge=0/0, ticks=39748/66470, in_queue=106218, util=89.81% 00:12:37.014 06:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:37.014 06:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=83247 00:12:37.014 06:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:37.014 06:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:37.014 [global] 00:12:37.014 thread=1 00:12:37.014 invalidate=1 00:12:37.014 rw=read 00:12:37.014 time_based=1 00:12:37.014 runtime=10 00:12:37.014 
ioengine=libaio 00:12:37.014 direct=1 00:12:37.014 bs=4096 00:12:37.014 iodepth=1 00:12:37.014 norandommap=1 00:12:37.014 numjobs=1 00:12:37.014 00:12:37.014 [job0] 00:12:37.014 filename=/dev/nvme0n1 00:12:37.014 [job1] 00:12:37.014 filename=/dev/nvme0n2 00:12:37.014 [job2] 00:12:37.014 filename=/dev/nvme0n3 00:12:37.014 [job3] 00:12:37.014 filename=/dev/nvme0n4 00:12:37.014 Could not set queue depth (nvme0n1) 00:12:37.014 Could not set queue depth (nvme0n2) 00:12:37.014 Could not set queue depth (nvme0n3) 00:12:37.014 Could not set queue depth (nvme0n4) 00:12:37.014 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.014 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.014 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.014 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.014 fio-3.35 00:12:37.014 Starting 4 threads 00:12:40.295 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:40.295 fio: pid=83290, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.295 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=82948096, buflen=4096 00:12:40.295 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:40.295 fio: pid=83289, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.295 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=70193152, buflen=4096 00:12:40.295 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.295 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:40.552 fio: pid=83287, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.552 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=62578688, buflen=4096 00:12:40.552 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.552 06:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:40.811 fio: pid=83288, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:40.811 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11132928, buflen=4096 00:12:40.811 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.811 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:40.811 00:12:40.811 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83287: Wed Nov 27 06:15:00 2024 00:12:40.811 read: IOPS=4529, BW=17.7MiB/s (18.6MB/s)(59.7MiB/3373msec) 00:12:40.811 slat (usec): min=4, max=12780, avg=16.37, stdev=166.85 00:12:40.811 clat (usec): min=86, max=3882, 
avg=203.09, stdev=81.52 00:12:40.811 lat (usec): min=93, max=13001, avg=219.46, stdev=189.74 00:12:40.811 clat percentiles (usec): 00:12:40.811 | 1.00th=[ 99], 5.00th=[ 109], 10.00th=[ 118], 20.00th=[ 161], 00:12:40.811 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:12:40.811 | 70.00th=[ 198], 80.00th=[ 245], 90.00th=[ 326], 95.00th=[ 367], 00:12:40.811 | 99.00th=[ 429], 99.50th=[ 457], 99.90th=[ 523], 99.95th=[ 570], 00:12:40.811 | 99.99th=[ 1385] 00:12:40.811 bw ( KiB/s): min=13298, max=22432, per=22.71%, avg=18021.67, stdev=4073.68, samples=6 00:12:40.811 iops : min= 3324, max= 5608, avg=4505.33, stdev=1018.54, samples=6 00:12:40.811 lat (usec) : 100=1.44%, 250=78.94%, 500=19.43%, 750=0.15%, 1000=0.01% 00:12:40.811 lat (msec) : 2=0.01%, 4=0.01% 00:12:40.811 cpu : usr=0.86%, sys=5.43%, ctx=15295, majf=0, minf=1 00:12:40.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 issued rwts: total=15279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.811 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83288: Wed Nov 27 06:15:00 2024 00:12:40.811 read: IOPS=5279, BW=20.6MiB/s (21.6MB/s)(74.6MiB/3618msec) 00:12:40.811 slat (usec): min=4, max=11729, avg=12.17, stdev=149.63 00:12:40.811 clat (usec): min=26, max=2929, avg=176.37, stdev=73.50 00:12:40.811 lat (usec): min=96, max=11946, avg=188.54, stdev=171.27 00:12:40.811 clat percentiles (usec): 00:12:40.811 | 1.00th=[ 97], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 112], 00:12:40.811 | 30.00th=[ 127], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:12:40.811 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 249], 95.00th=[ 326], 00:12:40.811 | 99.00th=[ 416], 99.50th=[ 465], 99.90th=[ 644], 99.95th=[ 783], 00:12:40.811 | 99.99th=[ 2040] 00:12:40.811 bw ( KiB/s): min=17320, max=24039, per=26.35%, avg=20904.83, stdev=2890.27, samples=6 00:12:40.811 iops : min= 4330, max= 6009, avg=5226.00, stdev=722.32, samples=6 00:12:40.811 lat (usec) : 50=0.01%, 100=3.90%, 250=86.30%, 500=9.49%, 750=0.24% 00:12:40.811 lat (usec) : 1000=0.05% 00:12:40.811 lat (msec) : 2=0.01%, 4=0.01% 00:12:40.811 cpu : usr=0.75%, sys=4.40%, ctx=19122, majf=0, minf=2 00:12:40.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 issued rwts: total=19103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.811 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83289: Wed Nov 27 06:15:00 2024 00:12:40.811 read: IOPS=5456, BW=21.3MiB/s (22.3MB/s)(66.9MiB/3141msec) 00:12:40.811 slat (usec): min=4, max=8618, avg=14.33, stdev=89.54 00:12:40.811 clat (usec): min=99, max=3436, avg=167.69, stdev=91.55 00:12:40.811 lat (usec): min=106, max=8806, avg=182.01, stdev=135.28 00:12:40.811 clat percentiles (usec): 00:12:40.811 | 1.00th=[ 105], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 114], 00:12:40.811 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 123], 60.00th=[ 128], 00:12:40.811 | 70.00th=[ 145], 80.00th=[ 253], 90.00th=[ 322], 
95.00th=[ 351], 00:12:40.811 | 99.00th=[ 420], 99.50th=[ 461], 99.90th=[ 635], 99.95th=[ 709], 00:12:40.811 | 99.99th=[ 1029] 00:12:40.811 bw ( KiB/s): min=16632, max=26372, per=28.07%, avg=22276.50, stdev=3837.05, samples=6 00:12:40.811 iops : min= 4158, max= 6593, avg=5569.00, stdev=959.11, samples=6 00:12:40.811 lat (usec) : 100=0.01%, 250=79.67%, 500=20.00%, 750=0.29%, 1000=0.02% 00:12:40.811 lat (msec) : 2=0.01%, 4=0.01% 00:12:40.811 cpu : usr=1.15%, sys=6.24%, ctx=17144, majf=0, minf=2 00:12:40.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 issued rwts: total=17138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.811 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=83290: Wed Nov 27 06:15:00 2024 00:12:40.811 read: IOPS=6933, BW=27.1MiB/s (28.4MB/s)(79.1MiB/2921msec) 00:12:40.811 slat (nsec): min=5534, max=81341, avg=7168.63, stdev=3326.05 00:12:40.811 clat (usec): min=102, max=1759, avg=136.37, stdev=38.29 00:12:40.811 lat (usec): min=109, max=1765, avg=143.54, stdev=39.88 00:12:40.811 clat percentiles (usec): 00:12:40.811 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 123], 00:12:40.811 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:12:40.811 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 169], 00:12:40.811 | 99.00th=[ 310], 99.50th=[ 388], 99.90th=[ 537], 99.95th=[ 611], 00:12:40.811 | 99.99th=[ 1696] 00:12:40.811 bw ( KiB/s): min=26856, max=28480, per=34.82%, avg=27627.00, stdev=724.49, samples=5 00:12:40.811 iops : min= 6714, max= 7120, avg=6906.60, stdev=181.00, samples=5 00:12:40.811 lat (usec) : 250=98.83%, 500=1.03%, 750=0.11%, 1000=0.01% 00:12:40.811 lat (msec) : 2=0.01% 00:12:40.811 cpu : usr=0.75%, sys=4.66%, ctx=20262, majf=0, minf=2 00:12:40.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.811 issued rwts: total=20252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.811 00:12:40.811 Run status group 0 (all jobs): 00:12:40.811 READ: bw=77.5MiB/s (81.2MB/s), 17.7MiB/s-27.1MiB/s (18.6MB/s-28.4MB/s), io=280MiB (294MB), run=2921-3618msec 00:12:40.811 00:12:40.811 Disk stats (read/write): 00:12:40.811 nvme0n1: ios=14124/0, merge=0/0, ticks=2908/0, in_queue=2908, util=95.96% 00:12:40.811 nvme0n2: ios=17319/0, merge=0/0, ticks=3170/0, in_queue=3170, util=95.87% 00:12:40.811 nvme0n3: ios=17117/0, merge=0/0, ticks=2916/0, in_queue=2916, util=96.50% 00:12:40.811 nvme0n4: ios=19998/0, merge=0/0, ticks=2732/0, in_queue=2732, util=96.78% 00:12:41.069 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.069 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:41.069 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.069 06:15:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:41.328 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.328 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:41.587 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.587 06:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 83247 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:41.845 nvmf hotplug test: fio failed as expected 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:41.845 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.104 rmmod nvme_tcp 00:12:42.104 rmmod nvme_fabrics 00:12:42.104 rmmod nvme_keyring 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 82765 ']' 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 82765 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 82765 ']' 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 82765 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82765 00:12:42.104 killing process with pid 82765 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82765' 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 82765 00:12:42.104 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 82765 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.363 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:42.622 ************************************ 00:12:42.622 END TEST nvmf_fio_target 00:12:42.622 ************************************ 00:12:42.622 00:12:42.622 real 0m18.521s 00:12:42.622 user 1m11.525s 00:12:42.622 sys 0m6.795s 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:42.622 ************************************ 00:12:42.622 START TEST nvmf_bdevio 00:12:42.622 ************************************ 00:12:42.622 06:15:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:42.622 * Looking for test storage... 
00:12:42.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.622 --rc genhtml_branch_coverage=1 00:12:42.622 --rc genhtml_function_coverage=1 00:12:42.622 --rc genhtml_legend=1 00:12:42.622 --rc geninfo_all_blocks=1 00:12:42.622 --rc geninfo_unexecuted_blocks=1 00:12:42.622 00:12:42.622 ' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.622 --rc genhtml_branch_coverage=1 00:12:42.622 --rc genhtml_function_coverage=1 00:12:42.622 --rc genhtml_legend=1 00:12:42.622 --rc geninfo_all_blocks=1 00:12:42.622 --rc geninfo_unexecuted_blocks=1 00:12:42.622 00:12:42.622 ' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.622 --rc genhtml_branch_coverage=1 00:12:42.622 --rc genhtml_function_coverage=1 00:12:42.622 --rc genhtml_legend=1 00:12:42.622 --rc geninfo_all_blocks=1 00:12:42.622 --rc geninfo_unexecuted_blocks=1 00:12:42.622 00:12:42.622 ' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.622 --rc genhtml_branch_coverage=1 00:12:42.622 --rc genhtml_function_coverage=1 00:12:42.622 --rc genhtml_legend=1 00:12:42.622 --rc geninfo_all_blocks=1 00:12:42.622 --rc geninfo_unexecuted_blocks=1 00:12:42.622 00:12:42.622 ' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:42.622 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.623 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
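Editor's note: the "[: : integer expression expected" message above is bash complaining about the traced test '[' '' -eq 1 ']' at common.sh line 33, where an empty variable is compared numerically; it is noisy but harmless here, since the failed test simply falls through. The nvmftestinit call traced next builds the virtual test network. A compressed sketch of the topology it creates, reconstructed from the trace that follows (one initiator pair and one target pair shown rather than all four):

    # Target side lives in a private namespace; both sides hang off one
    # bridge so 10.0.0.1 (initiator) can reach 10.0.0.3 (target).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # plus "ip link set ... up" on each interface, as traced below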
00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.623 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:42.882 Cannot find device "nvmf_init_br" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:42.882 Cannot find device "nvmf_init_br2" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:42.882 Cannot find device "nvmf_tgt_br" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:42.882 Cannot find device "nvmf_tgt_br2" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:42.882 Cannot find device "nvmf_init_br" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:42.882 Cannot find device "nvmf_init_br2" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:42.882 Cannot find device "nvmf_tgt_br" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:42.882 Cannot find device "nvmf_tgt_br2" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:42.882 Cannot find device "nvmf_br" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:42.882 Cannot find device "nvmf_init_if" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:42.882 Cannot find device "nvmf_init_if2" 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:42.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:42.882 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:42.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:42.883 
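Editor's note: every "Cannot find device" and "Cannot open network namespace" line above is expected. nvmf_veth_fini runs before setup so the topology is rebuilt from a clean slate, and each delete is followed by a traced "true" that swallows the non-zero exit status on a runner where the devices never existed. The idiom, sketched (illustrative, not the script's literal lines):

    # Idempotent pre-clean: deletes may fail on a fresh host.
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip netns add nvmf_tgt_ns_spdk   # now guaranteed to start clean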
06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:42.883 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:43.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:43.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:12:43.141 00:12:43.141 --- 10.0.0.3 ping statistics --- 00:12:43.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.141 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:43.141 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:43.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:12:43.141 00:12:43.141 --- 10.0.0.4 ping statistics --- 00:12:43.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.141 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:43.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:43.141 00:12:43.141 --- 10.0.0.1 ping statistics --- 00:12:43.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.141 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:43.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:12:43.141 00:12:43.141 --- 10.0.0.2 ping statistics --- 00:12:43.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.141 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=83661 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 83661 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 83661 ']' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.141 06:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.141 [2024-11-27 06:15:02.576683] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:12:43.141 [2024-11-27 06:15:02.576837] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.400 [2024-11-27 06:15:02.692700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:12:43.400 [2024-11-27 06:15:02.723096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.400 [2024-11-27 06:15:02.752780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.400 [2024-11-27 06:15:02.752812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.400 [2024-11-27 06:15:02.752818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.400 [2024-11-27 06:15:02.752823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.400 [2024-11-27 06:15:02.752828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.400 [2024-11-27 06:15:02.753866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:43.400 [2024-11-27 06:15:02.754190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:43.400 [2024-11-27 06:15:02.754428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:43.400 [2024-11-27 06:15:02.754445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.973 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:43.973 [2024-11-27 06:15:03.499857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 Malloc0 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.231 
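Editor's note: the bdevio target is assembled through rpc_cmd, a thin wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock inside the namespace. The sequence just traced, plus the two calls that follow below, condensed into plain rpc.py form:

    # Same five calls the trace issues via rpc_cmd:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420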
06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 [2024-11-27 06:15:03.553673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:44.231 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:44.232 { 00:12:44.232 "params": { 00:12:44.232 "name": "Nvme$subsystem", 00:12:44.232 "trtype": "$TEST_TRANSPORT", 00:12:44.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:44.232 "adrfam": "ipv4", 00:12:44.232 "trsvcid": "$NVMF_PORT", 00:12:44.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:44.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:44.232 "hdgst": ${hdgst:-false}, 00:12:44.232 "ddgst": ${ddgst:-false} 00:12:44.232 }, 00:12:44.232 "method": "bdev_nvme_attach_controller" 00:12:44.232 } 00:12:44.232 EOF 00:12:44.232 )") 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:44.232 06:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:44.232 "params": { 00:12:44.232 "name": "Nvme1", 00:12:44.232 "trtype": "tcp", 00:12:44.232 "traddr": "10.0.0.3", 00:12:44.232 "adrfam": "ipv4", 00:12:44.232 "trsvcid": "4420", 00:12:44.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:44.232 "hdgst": false, 00:12:44.232 "ddgst": false 00:12:44.232 }, 00:12:44.232 "method": "bdev_nvme_attach_controller" 00:12:44.232 }' 00:12:44.232 [2024-11-27 06:15:03.596741] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
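Editor's note: bdevio never talks to the target directly; gen_nvmf_target_json renders the heredoc template above into the bdev_nvme_attach_controller config printed next, so the bdev under test (Nvme1) is a remote NVMe-oF namespace reached over TCP at 10.0.0.3:4420. The /dev/fd/62 argument in the trace is bash process substitution feeding that config in as a file; an equivalent invocation, assuming nvmf/common.sh is sourced:

    # /dev/fd/62 in the trace is just this process substitution:
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)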
00:12:44.232 [2024-11-27 06:15:03.596799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83715 ] 00:12:44.232 [2024-11-27 06:15:03.711405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:44.232 [2024-11-27 06:15:03.738956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:44.232 [2024-11-27 06:15:03.759989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.490 [2024-11-27 06:15:03.760390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.490 [2024-11-27 06:15:03.760506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.490 I/O targets: 00:12:44.490 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:44.490 00:12:44.490 00:12:44.490 CUnit - A unit testing framework for C - Version 2.1-3 00:12:44.490 http://cunit.sourceforge.net/ 00:12:44.490 00:12:44.490 00:12:44.490 Suite: bdevio tests on: Nvme1n1 00:12:44.490 Test: blockdev write read block ...passed 00:12:44.490 Test: blockdev write zeroes read block ...passed 00:12:44.490 Test: blockdev write zeroes read no split ...passed 00:12:44.490 Test: blockdev write zeroes read split ...passed 00:12:44.490 Test: blockdev write zeroes read split partial ...passed 00:12:44.490 Test: blockdev reset ...[2024-11-27 06:15:04.011263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:44.490 [2024-11-27 06:15:04.011339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe71340 (9): Bad file descriptor 00:12:44.748 passed 00:12:44.748 Test: blockdev write read 8 blocks ...[2024-11-27 06:15:04.030019] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
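Editor's note: the *ERROR* "Failed to flush tqpair ... (9): Bad file descriptor" above belongs to the "blockdev reset" test and is not a failure of it. The reset deliberately disconnects the controller, any I/O still in flight hits the closed socket, and the "Resetting controller successful" notice confirms the bdev_nvme layer reconnected; the test's "passed" verdict follows in the next entry.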
00:12:44.748 passed 00:12:44.748 Test: blockdev write read size > 128k ...passed 00:12:44.748 Test: blockdev write read invalid size ...passed 00:12:44.748 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:44.748 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:44.748 Test: blockdev write read max offset ...passed 00:12:44.748 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:44.748 Test: blockdev writev readv 8 blocks ...passed 00:12:44.748 Test: blockdev writev readv 30 x 1block ...passed 00:12:44.748 Test: blockdev writev readv block ...passed 00:12:44.748 Test: blockdev writev readv size > 128k ...passed 00:12:44.748 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:44.748 Test: blockdev comparev and writev ...[2024-11-27 06:15:04.203142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:44.748 [2024-11-27 06:15:04.203194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:44.748 [2024-11-27 06:15:04.203408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:44.748 [2024-11-27 06:15:04.203428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:44.748 [2024-11-27 06:15:04.203718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:44.748 [2024-11-27 06:15:04.203742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:44.748 [2024-11-27 06:15:04.203954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:44.748 [2024-11-27 06:15:04.203974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:44.748 [2024-11-27 06:15:04.203980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
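Editor's note: the COMPARE FAILURE and ABORTED - FAILED FUSED notices above are the expected output of the "comparev and writev" test, which exercises, among other cases, fused compare-and-write pairs whose compare half intentionally mismatches. Decoding the (SCT/SC) pairs SPDK prints:

    # (02/85): SCT 0x2 media/data integrity, SC 0x85 -> Compare Failure
    # (00/09): SCT 0x0 generic, SC 0x09 -> Aborted, Failed Fused Command
    # i.e. when the COMPARE half of a fused pair fails, the paired
    # WRITE is aborted, exactly as the NVMe spec requires.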
00:12:44.748 passed 00:12:45.007 Test: blockdev nvme passthru rw ...passed 00:12:45.007 Test: blockdev nvme passthru vendor specific ...[2024-11-27 06:15:04.287326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.007 [2024-11-27 06:15:04.287355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:45.007 [2024-11-27 06:15:04.287441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.008 [2024-11-27 06:15:04.287448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:45.008 [2024-11-27 06:15:04.287534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.008 [2024-11-27 06:15:04.287541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:45.008 [2024-11-27 06:15:04.287619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:45.008 [2024-11-27 06:15:04.287627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:45.008 passed 00:12:45.008 Test: blockdev nvme admin passthru ...passed 00:12:45.008 Test: blockdev copy ...passed 00:12:45.008 00:12:45.008 Run Summary: Type Total Ran Passed Failed Inactive 00:12:45.008 suites 1 1 n/a 0 0 00:12:45.008 tests 23 23 23 0 0 00:12:45.008 asserts 152 152 152 0 n/a 00:12:45.008 00:12:45.008 Elapsed time = 0.888 seconds 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.008 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.266 rmmod nvme_tcp 00:12:45.266 rmmod nvme_fabrics 00:12:45.266 rmmod nvme_keyring 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 83661 ']' 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 83661 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 83661 ']' 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 83661 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83661 00:12:45.266 killing process with pid 83661 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83661' 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 83661 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 83661 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:45.266 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:45.524 06:15:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:45.524 00:12:45.524 real 0m3.011s 00:12:45.524 user 0m9.755s 00:12:45.524 sys 0m0.698s 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.524 ************************************ 00:12:45.524 END TEST nvmf_bdevio 00:12:45.524 ************************************ 00:12:45.524 06:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.524 06:15:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:45.524 00:12:45.524 real 3m23.384s 00:12:45.524 user 10m37.588s 00:12:45.524 sys 0m46.233s 00:12:45.524 06:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.524 06:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:45.524 ************************************ 00:12:45.524 END TEST nvmf_target_core 00:12:45.524 ************************************ 00:12:45.783 06:15:05 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:45.783 06:15:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.783 06:15:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.783 06:15:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.783 ************************************ 00:12:45.783 START TEST nvmf_target_extra 00:12:45.783 ************************************ 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:45.783 * Looking for test storage... 
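Editor's note: in the timing blocks above, "user" exceeding "real" (0m9.755s vs 0m3.011s for bdevio, 10m37s vs 3m23s for the whole nvmf_target_core suite) is normal for SPDK runs: the polling reactors busy-spin on several cores for the lifetime of each target, so CPU time accumulates across cores faster than wall-clock time passes.

    # run_test in autotest_common.sh effectively does:
    #   time { "$@"; }
    # so these real/user/sys lines are bash's time builtin per suite.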
00:12:45.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.783 --rc genhtml_branch_coverage=1 00:12:45.783 --rc genhtml_function_coverage=1 00:12:45.783 --rc genhtml_legend=1 00:12:45.783 --rc geninfo_all_blocks=1 00:12:45.783 --rc geninfo_unexecuted_blocks=1 00:12:45.783 00:12:45.783 ' 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.783 --rc genhtml_branch_coverage=1 00:12:45.783 --rc genhtml_function_coverage=1 00:12:45.783 --rc genhtml_legend=1 00:12:45.783 --rc geninfo_all_blocks=1 00:12:45.783 --rc geninfo_unexecuted_blocks=1 00:12:45.783 00:12:45.783 ' 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.783 --rc genhtml_branch_coverage=1 00:12:45.783 --rc genhtml_function_coverage=1 00:12:45.783 --rc genhtml_legend=1 00:12:45.783 --rc geninfo_all_blocks=1 00:12:45.783 --rc geninfo_unexecuted_blocks=1 00:12:45.783 00:12:45.783 ' 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:45.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.783 --rc genhtml_branch_coverage=1 00:12:45.783 --rc genhtml_function_coverage=1 00:12:45.783 --rc genhtml_legend=1 00:12:45.783 --rc geninfo_all_blocks=1 00:12:45.783 --rc geninfo_unexecuted_blocks=1 00:12:45.783 00:12:45.783 ' 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.783 06:15:05 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.783 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.784 ************************************ 00:12:45.784 START TEST nvmf_example 00:12:45.784 ************************************ 00:12:45.784 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:46.042 * Looking for test storage... 
00:12:46.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.042 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:46.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.042 --rc genhtml_branch_coverage=1 00:12:46.043 --rc genhtml_function_coverage=1 00:12:46.043 --rc genhtml_legend=1 00:12:46.043 --rc geninfo_all_blocks=1 00:12:46.043 --rc geninfo_unexecuted_blocks=1 00:12:46.043 00:12:46.043 ' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.043 --rc genhtml_branch_coverage=1 00:12:46.043 --rc genhtml_function_coverage=1 00:12:46.043 --rc genhtml_legend=1 00:12:46.043 --rc geninfo_all_blocks=1 00:12:46.043 --rc geninfo_unexecuted_blocks=1 00:12:46.043 00:12:46.043 ' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.043 --rc genhtml_branch_coverage=1 00:12:46.043 --rc genhtml_function_coverage=1 00:12:46.043 --rc genhtml_legend=1 00:12:46.043 --rc geninfo_all_blocks=1 00:12:46.043 --rc geninfo_unexecuted_blocks=1 00:12:46.043 00:12:46.043 ' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:46.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.043 --rc genhtml_branch_coverage=1 00:12:46.043 --rc genhtml_function_coverage=1 00:12:46.043 --rc genhtml_legend=1 00:12:46.043 --rc geninfo_all_blocks=1 00:12:46.043 --rc geninfo_unexecuted_blocks=1 00:12:46.043 00:12:46.043 ' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:46.043 06:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.043 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:46.043 06:15:05 
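The PATH dumps above balloon because paths/export.sh prepends its tool directories unconditionally each time it is re-sourced, so every pass adds another /opt/go, /opt/protoc, /opt/golangci prefix. A guarded prepend, shown here as a hypothetical fix and not what the script currently does, would keep the variable stable across repeated sourcing:

```bash
# Hypothetical guard (not in paths/export.sh): prepend a directory only if absent.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                  # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH
```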
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:46.043 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:46.044 Cannot find device "nvmf_init_br" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:46.044 Cannot find device "nvmf_init_br2" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:46.044 Cannot find device "nvmf_tgt_br" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.044 Cannot find device "nvmf_tgt_br2" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:46.044 Cannot find device "nvmf_init_br" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:46.044 Cannot find device "nvmf_init_br2" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:46.044 Cannot find device "nvmf_tgt_br" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:46.044 Cannot find device "nvmf_tgt_br2" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:46.044 Cannot find device "nvmf_br" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:46.044 Cannot find 
device "nvmf_init_if" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:46.044 Cannot find device "nvmf_init_if2" 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:46.044 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:46.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:12:46.302 00:12:46.302 --- 10.0.0.3 ping statistics --- 00:12:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.302 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:46.302 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:46.302 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:12:46.302 00:12:46.302 --- 10.0.0.4 ping statistics --- 00:12:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.302 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:46.302 00:12:46.302 --- 10.0.0.1 ping statistics --- 00:12:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.302 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:46.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:12:46.302 00:12:46.302 --- 10.0.0.2 ping statistics --- 00:12:46.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.302 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=84003 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
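The nvmf_veth_init sequence above builds the test network that these four pings verify: a network namespace for the target side, veth pairs whose target ends are moved into it, and a bridge joining the host-side peers. A condensed sketch of one initiator/target pair, with interface names and addresses taken from the trace (the second pair, iptables rules, and error handling are omitted):

```bash
# One initiator/target veth pair from the topology traced above (condensed).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge                              # bridge stitches the peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ping -c 1 10.0.0.3                                           # host initiator -> namespaced target
```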
00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 84003 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 84003 ']' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.302 06:15:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.236 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.494 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:47.495 
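The rpc_cmd calls traced above are the complete target configuration: a TCP transport, a 64 MiB malloc bdev exposed as namespace 1 of cnode1, plus the listener on 10.0.0.3:4420 and the spdk_nvme_perf load that follow just below. Against an already-running nvmf_tgt, the same sequence would look like this with SPDK's rpc.py, a sketch reusing the trace's own flags and values:

```bash
# The same bring-up via SPDK's rpc.py (sketch; assumes nvmf_tgt is already listening on its RPC socket).
rpc.py nvmf_create_transport -t tcp -o -u 8192     # flags copied from the traced NVMF_TRANSPORT_OPTS
rpc.py bdev_malloc_create 64 512                   # 64 MiB bdev, 512 B blocks -> "Malloc0"
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Load generator, exactly as traced: queue depth 64, 4 KiB, mixed random I/O, 10 seconds.
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```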
06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:12:47.495 06:15:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:57.534 Initializing NVMe Controllers
00:12:57.534 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:12:57.534 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:57.534 Initialization complete. Launching workers.
00:12:57.534 ========================================================
00:12:57.534                                                                                Latency(us)
00:12:57.534 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:12:57.534 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   16182.31      63.21    3954.45     552.92   20190.31
00:12:57.534 ========================================================
00:12:57.534 Total                                                                  :   16182.31      63.21    3954.45     552.92   20190.31
00:12:57.534
00:12:57.534 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:57.534 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:57.534 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.534 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.792 rmmod nvme_tcp 00:12:57.792 rmmod nvme_fabrics 00:12:57.792 rmmod nvme_keyring 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 84003 ']' 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 84003 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example --
common/autotest_common.sh@954 -- # '[' -z 84003 ']' 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 84003 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84003 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:57.792 killing process with pid 84003 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84003' 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 84003 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 84003 00:12:57.792 nvmf threads initialize successfully 00:12:57.792 bdev subsystem init successfully 00:12:57.792 created a nvmf target service 00:12:57.792 create targets's poll groups done 00:12:57.792 all subsystems of target started 00:12:57.792 nvmf target is running 00:12:57.792 all subsystems of target stopped 00:12:57.792 destroy targets's poll groups done 00:12:57.792 destroyed the nvmf target service 00:12:57.792 bdev subsystem finish successfully 00:12:57.792 nvmf threads destroy successfully 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:57.792 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:58.051 06:15:17 
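The firewall handling in this run follows a tidy pattern: ipts tags every rule it inserts with an SPDK_NVMF: comment, and the teardown's iptr (traced above as iptables-save piped through grep -v SPDK_NVMF into iptables-restore) strips exactly the tagged rules and nothing else. The two helpers, reconstructed from the traced commands:

```bash
# Reconstructed from the trace: tag each rule so teardown can strip them wholesale.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
# ... tests run ...
iptr                                                            # remove only the SPDK_NVMF-tagged rules
```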
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:58.051 00:12:58.051 real 0m12.284s 00:12:58.051 user 0m44.357s 00:12:58.051 sys 0m1.448s 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:58.051 ************************************ 00:12:58.051 END TEST nvmf_example 00:12:58.051 ************************************ 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.051 ************************************ 00:12:58.051 START TEST nvmf_filesystem 00:12:58.051 ************************************ 00:12:58.051 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:58.315 * Looking for test storage... 
00:12:58.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:58.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.315 --rc genhtml_branch_coverage=1 00:12:58.315 --rc genhtml_function_coverage=1 00:12:58.315 --rc genhtml_legend=1 00:12:58.315 --rc geninfo_all_blocks=1 00:12:58.315 --rc geninfo_unexecuted_blocks=1 00:12:58.315 00:12:58.315 ' 00:12:58.315 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:58.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.316 --rc genhtml_branch_coverage=1 00:12:58.316 --rc genhtml_function_coverage=1 00:12:58.316 --rc genhtml_legend=1 00:12:58.316 --rc geninfo_all_blocks=1 00:12:58.316 --rc geninfo_unexecuted_blocks=1 00:12:58.316 00:12:58.316 ' 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:58.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.316 --rc genhtml_branch_coverage=1 00:12:58.316 --rc genhtml_function_coverage=1 00:12:58.316 --rc genhtml_legend=1 00:12:58.316 --rc geninfo_all_blocks=1 00:12:58.316 --rc geninfo_unexecuted_blocks=1 00:12:58.316 00:12:58.316 ' 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:58.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.316 --rc genhtml_branch_coverage=1 00:12:58.316 --rc genhtml_function_coverage=1 00:12:58.316 --rc genhtml_legend=1 00:12:58.316 --rc geninfo_all_blocks=1 00:12:58.316 --rc geninfo_unexecuted_blocks=1 00:12:58.316 00:12:58.316 ' 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:58.316 06:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:58.316 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:58.317 
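
Annotation: as the long run of build_config.sh@N entries shows, build_config.sh is nothing more than a flat list of CONFIG_<FEATURE>=y/n (or path) assignments that the harness sources into its environment. A hedged sketch of how a test script could probe one of those flags after sourcing; the require_config helper is made up here for illustration, only the file path and CONFIG_GOLANG value come from the trace:

  #!/usr/bin/env bash
  # Pull in the generated build configuration (path as in the trace).
  source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh

  # require_config: skip the calling test unless a CONFIG_* flag is "y".
  # Uses ${!flag} indirection to read the variable named in $1.
  require_config() {
      local flag=$1
      if [[ ${!flag:-n} != y ]]; then
          echo "skipping: $flag is not enabled" >&2
          exit 0
      fi
  }

  require_config CONFIG_GOLANG   # "y" in this run, per build_config.sh@66
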
06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:58.317 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:58.317 #define SPDK_CONFIG_H 00:12:58.317 #define 
SPDK_CONFIG_AIO_FSDEV 1 00:12:58.317 #define SPDK_CONFIG_APPS 1 00:12:58.317 #define SPDK_CONFIG_ARCH native 00:12:58.317 #undef SPDK_CONFIG_ASAN 00:12:58.317 #define SPDK_CONFIG_AVAHI 1 00:12:58.317 #undef SPDK_CONFIG_CET 00:12:58.317 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:58.317 #define SPDK_CONFIG_COVERAGE 1 00:12:58.317 #define SPDK_CONFIG_CROSS_PREFIX 00:12:58.317 #undef SPDK_CONFIG_CRYPTO 00:12:58.317 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:58.317 #undef SPDK_CONFIG_CUSTOMOCF 00:12:58.317 #undef SPDK_CONFIG_DAOS 00:12:58.317 #define SPDK_CONFIG_DAOS_DIR 00:12:58.317 #define SPDK_CONFIG_DEBUG 1 00:12:58.317 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:58.317 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:58.317 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:58.317 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:58.317 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:58.317 #undef SPDK_CONFIG_DPDK_UADK 00:12:58.317 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:58.317 #define SPDK_CONFIG_EXAMPLES 1 00:12:58.317 #undef SPDK_CONFIG_FC 00:12:58.317 #define SPDK_CONFIG_FC_PATH 00:12:58.317 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:58.317 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:58.317 #define SPDK_CONFIG_FSDEV 1 00:12:58.317 #undef SPDK_CONFIG_FUSE 00:12:58.317 #undef SPDK_CONFIG_FUZZER 00:12:58.317 #define SPDK_CONFIG_FUZZER_LIB 00:12:58.317 #define SPDK_CONFIG_GOLANG 1 00:12:58.317 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:58.317 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:58.317 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:58.317 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:58.317 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:58.317 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:58.317 #undef SPDK_CONFIG_HAVE_LZ4 00:12:58.317 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:58.317 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:58.317 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:58.317 #define SPDK_CONFIG_IDXD 1 00:12:58.317 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:58.317 #undef SPDK_CONFIG_IPSEC_MB 00:12:58.317 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:58.317 #define SPDK_CONFIG_ISAL 1 00:12:58.317 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:58.317 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:58.317 #define SPDK_CONFIG_LIBDIR 00:12:58.317 #undef SPDK_CONFIG_LTO 00:12:58.317 #define SPDK_CONFIG_MAX_LCORES 128 00:12:58.317 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:58.317 #define SPDK_CONFIG_NVME_CUSE 1 00:12:58.317 #undef SPDK_CONFIG_OCF 00:12:58.317 #define SPDK_CONFIG_OCF_PATH 00:12:58.317 #define SPDK_CONFIG_OPENSSL_PATH 00:12:58.317 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:58.317 #define SPDK_CONFIG_PGO_DIR 00:12:58.317 #undef SPDK_CONFIG_PGO_USE 00:12:58.317 #define SPDK_CONFIG_PREFIX /usr/local 00:12:58.317 #undef SPDK_CONFIG_RAID5F 00:12:58.317 #undef SPDK_CONFIG_RBD 00:12:58.317 #define SPDK_CONFIG_RDMA 1 00:12:58.318 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:58.318 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:58.318 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:58.318 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:58.318 #define SPDK_CONFIG_SHARED 1 00:12:58.318 #undef SPDK_CONFIG_SMA 00:12:58.318 #define SPDK_CONFIG_TESTS 1 00:12:58.318 #undef SPDK_CONFIG_TSAN 00:12:58.318 #define SPDK_CONFIG_UBLK 1 00:12:58.318 #define SPDK_CONFIG_UBSAN 1 00:12:58.318 #undef SPDK_CONFIG_UNIT_TESTS 00:12:58.318 #undef SPDK_CONFIG_URING 00:12:58.318 #define SPDK_CONFIG_URING_PATH 00:12:58.318 
#undef SPDK_CONFIG_URING_ZNS 00:12:58.318 #define SPDK_CONFIG_USDT 1 00:12:58.318 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:58.318 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:58.318 #undef SPDK_CONFIG_VFIO_USER 00:12:58.318 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:58.318 #define SPDK_CONFIG_VHOST 1 00:12:58.318 #define SPDK_CONFIG_VIRTIO 1 00:12:58.318 #undef SPDK_CONFIG_VTUNE 00:12:58.318 #define SPDK_CONFIG_VTUNE_DIR 00:12:58.318 #define SPDK_CONFIG_WERROR 1 00:12:58.318 #define SPDK_CONFIG_WPDK_DIR 00:12:58.318 #undef SPDK_CONFIG_XNVME 00:12:58.318 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
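
Annotation: the `== *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G*` tail that closes the dump above is applications.sh deciding whether this is a debug build: it expands the whole of include/spdk/config.h inside a [[ ]] test and glob-matches the literal "#define SPDK_CONFIG_DEBUG" token, which xtrace prints with every character backslash-escaped. A standalone sketch of the same check:

  # Glob-match a generated config header for a feature define.
  config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h

  # $(<file) is bash shorthand for $(cat file); inside [[ ]] no word
  # splitting happens, so the whole header is matched as one string.
  if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build detected"
  else
      echo "release build"
  fi
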
paths/export.sh@5 -- # export PATH 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
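
Annotation: paths/export.sh prepends the protoc, Go and golangci directories every time it is sourced, which is why the PATH echoed above carries the same three entries many times over. The duplication is harmless for lookup (the first hit wins) but easy to squash; a small sketch of one way to deduplicate while preserving order, which is not something the harness itself does:

  # Deduplicate $PATH in place, keeping the first occurrence of each entry.
  dedup_path() {
      local entry out= seen=:
      local IFS=:                 # split the unquoted $PATH on colons
      for entry in $PATH; do
          [[ $seen == *":$entry:"* ]] && continue   # already kept
          seen+="$entry:"
          out+="${out:+:}$entry"
      done
      PATH=$out
  }

  dedup_path
  echo "$PATH"
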
pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:58.318 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 
-- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:58.319 
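
Annotation: every `: 0` (or `: 1`, `: tcp`) immediately followed by an `export SPDK_TEST_*` in this stretch of the trace is one idiom repeated per flag: the `:` no-op builtin evaluates a `${VAR:=default}` expansion, assigning the default only when the variable is unset or empty, and the export makes the result visible to child test scripts. A hedged reconstruction of the pattern for flags visible above (the trace shows only the expanded effects, not the source lines themselves):

  # Default-then-export idiom: ':' succeeds while forcing the expansion.
  : "${SPDK_TEST_NVME_CUSE:=0}";        export SPDK_TEST_NVME_CUSE
  : "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT

  # Downstream scripts can then gate work on the flags:
  if (( SPDK_TEST_NVMF )); then
      echo "running nvmf tests over $SPDK_TEST_NVMF_TRANSPORT"
  fi
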
06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : main 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:58.319 
06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:58.319 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:58.320 06:15:17 
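
Annotation: the block above wires up the sanitizer runtimes: ASAN_OPTIONS and UBSAN_OPTIONS are exported so instrumented binaries abort loudly with core dumps enabled, and a throwaway LeakSanitizer suppression file is regenerated each run (the `rm -rf` plus `echo leak:libfuse3.so` pair) so a known libfuse leak does not fail otherwise-clean tests. The same setup in isolation, with option strings and the file name taken verbatim from the trace:

  # Make sanitizer failures fatal and verbose.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

  # Rebuild the leak suppression list from scratch on every run.
  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo 'leak:libfuse3.so' > "$supp"
  export LSAN_OPTIONS=suppressions=$supp
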
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:58.320 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 84283 ]] 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 84283 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.7AyN1R 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.7AyN1R/tests/target /tmp/spdk.7AyN1R 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size 
use avail _ mount 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13239762944 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6347227136 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:58.321 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6255362048 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486018048 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:58.322 06:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13239762944 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6347227136 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=159744 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:58.322 06:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93579780096 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6122999808 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:58.322 * Looking for test storage... 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13239762944 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:58.322 06:15:17 
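
Annotation: the set_test_storage walk traced above picks where the test's scratch data lands. It builds a candidate list (the test's own directory, then a mktemp fallback under /tmp), parses `df -T` into parallel associative arrays of filesystem type, size and free bytes, and accepts the first mount with enough room; here /home on btrfs offers ~13 GB against the 2 GiB request. A compact sketch of the df-parsing half, with array and field names mirroring the trace (the byte conversion is an assumption; df -T reports 1K blocks while the traced values are already bytes):

  #!/usr/bin/env bash
  declare -A mounts fss sizes avails uses

  # df -T columns: source fstype 1K-blocks used avail use% mount.
  while read -r source fs size use avail _ mount; do
      mounts[$mount]=$source
      fss[$mount]=$fs
      sizes[$mount]=$((size * 1024))    # bytes
      uses[$mount]=$((use * 1024))
      avails[$mount]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)

  requested=$((2 * 1024 * 1024 * 1024))  # 2 GiB, as in the trace
  for mount in "${!avails[@]}"; do
      (( ${avails[$mount]} >= requested )) && echo "candidate: $mount"
  done
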
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:58.322 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:58.582 06:15:17 
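
Annotation: the PS4 assignment at autotest_common.sh@1684 above is what produces the prefix on every line of this log: bash expands PS4 like a prompt before printing each traced command, so `\t` becomes the HH:MM:SS timestamp, ${test_domain:-} becomes the dotted suite name (nvmf_tcp.nvmf_target_extra.nvmf_filesystem), and the BASH_SOURCE/LINENO expansions name the script and line being executed, trimmed to its last two path components. A toy script reproducing the format (run it as a file so BASH_SOURCE is set); the version comparison whose trace begins here continues below:

  #!/usr/bin/env bash
  test_domain=nvmf_tcp.demo

  # Same prefix recipe as the harness: time, domain, script@line, '$'.
  PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '

  set -x
  echo hello   # traced as:  HH:MM:SS nvmf_tcp.demo -- <dir/script>@<line> -- $ echo hello
  set +x
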
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.582 --rc genhtml_branch_coverage=1 00:12:58.582 --rc genhtml_function_coverage=1 00:12:58.582 --rc genhtml_legend=1 00:12:58.582 --rc geninfo_all_blocks=1 00:12:58.582 --rc geninfo_unexecuted_blocks=1 00:12:58.582 00:12:58.582 ' 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.582 --rc genhtml_branch_coverage=1 00:12:58.582 --rc genhtml_function_coverage=1 00:12:58.582 --rc genhtml_legend=1 00:12:58.582 --rc geninfo_all_blocks=1 00:12:58.582 --rc geninfo_unexecuted_blocks=1 00:12:58.582 00:12:58.582 ' 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.582 --rc genhtml_branch_coverage=1 00:12:58.582 --rc genhtml_function_coverage=1 00:12:58.582 --rc genhtml_legend=1 00:12:58.582 --rc geninfo_all_blocks=1 00:12:58.582 --rc geninfo_unexecuted_blocks=1 00:12:58.582 00:12:58.582 ' 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.582 --rc genhtml_branch_coverage=1 00:12:58.582 --rc genhtml_function_coverage=1 00:12:58.582 --rc genhtml_legend=1 00:12:58.582 --rc geninfo_all_blocks=1 
00:12:58.582 --rc geninfo_unexecuted_blocks=1 00:12:58.582 00:12:58.582 ' 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.582 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.583 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:58.583 Cannot find device "nvmf_init_br" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.583 Cannot find device "nvmf_init_br2" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.583 Cannot find device "nvmf_tgt_br" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.583 Cannot find device "nvmf_tgt_br2" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.583 Cannot find device "nvmf_init_br" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.583 Cannot find device "nvmf_init_br2" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.583 Cannot find device "nvmf_tgt_br" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.583 Cannot find device "nvmf_tgt_br2" 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:12:58.583 06:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.583 Cannot find device "nvmf_br" 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:12:58.583 06:15:18 
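The "Cannot find device" and "Cannot open network namespace" messages above are expected: teardown runs unconditionally before setup, and each removal is paired with a guard (traced as a bare true after every failing command) so a fresh host cannot abort the script. The same idiom, written with the equivalent || true:

    # Idempotent network teardown: every step may fail on a clean host.
    cleanup_net() {
        local dev
        for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
            ip link set "$dev" nomaster || true
            ip link set "$dev" down     || true
        done
        ip link delete nvmf_br type bridge || true
        ip link delete nvmf_init_if        || true
        ip link delete nvmf_init_if2       || true
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    }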
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.583 Cannot find device "nvmf_init_if" 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.583 Cannot find device "nvmf_init_if2" 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:58.583 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:58.843 06:15:18 
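Setup is the mirror image of that teardown: a namespace for the target side, veth pairs whose peer ends stay in the root namespace, 10.0.0.0/24 addressing, and everything brought up. Reduced to a single pair (names with a demo_ prefix are illustrative; run as root):

    ip netns add demo_ns                                  # target-side namespace
    ip link add demo_if type veth peer name demo_peer     # veth pair
    ip link set demo_if netns demo_ns                     # move one end inside
    ip addr add 10.0.0.1/24 dev demo_peer                 # initiator-side address
    ip netns exec demo_ns ip addr add 10.0.0.3/24 dev demo_if   # target-side address
    ip link set demo_peer up
    ip netns exec demo_ns ip link set demo_if up
    ip netns exec demo_ns ip link set lo up
    ping -c 1 10.0.0.3                                    # root namespace reaches the target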
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:58.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:58.843 00:12:58.843 --- 10.0.0.3 ping statistics --- 00:12:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.843 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:58.843 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:58.843 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:12:58.843 00:12:58.843 --- 10.0.0.4 ping statistics --- 00:12:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.843 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:58.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:58.843 00:12:58.843 --- 10.0.0.1 ping statistics --- 00:12:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.843 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:58.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:12:58.843 00:12:58.843 --- 10.0.0.2 ping statistics --- 00:12:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.843 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:58.843 ************************************ 00:12:58.843 START TEST nvmf_filesystem_no_in_capsule 00:12:58.843 ************************************ 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=84461 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@510 -- # waitforlisten 84461 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 84461 ']' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.843 06:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:58.843 [2024-11-27 06:15:18.272294] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:12:58.843 [2024-11-27 06:15:18.272349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.102 [2024-11-27 06:15:18.387470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:59.102 [2024-11-27 06:15:18.413965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.102 [2024-11-27 06:15:18.432748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.102 [2024-11-27 06:15:18.432782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.102 [2024-11-27 06:15:18.432788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.102 [2024-11-27 06:15:18.432793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.102 [2024-11-27 06:15:18.432797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
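The target is launched inside the namespace by splicing two arrays (NVMF_TARGET_NS_CMD in front of NVMF_APP, traced at the end of the previous block), after which waitforlisten polls until pid 84461 answers on the RPC socket. A hedged approximation of that wait (the real helper also issues an RPC to confirm liveness):

    waitforlisten() {
        local pid=$1 rpc=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # process died early
            [[ -S $rpc ]] && return 0                 # socket is up
            sleep 0.1
        done
        return 1
    }

    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk
              /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF)
    "${NVMF_APP[@]}" &
    waitforlisten $!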
00:12:59.102 [2024-11-27 06:15:18.433458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.102 [2024-11-27 06:15:18.433671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.102 [2024-11-27 06:15:18.434061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.102 [2024-11-27 06:15:18.434233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.667 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.667 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:59.667 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.668 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.668 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.668 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.668 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:59.668 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:59.668 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.668 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.668 [2024-11-27 06:15:19.190556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.924 Malloc1 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.924 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.925 06:15:19 
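With the reactors up, provisioning the whole target comes down to a handful of RPCs, traced here through the rpc_cmd wrapper. The same sequence issued through the rpc.py CLI directly (a sketch; the add-ns and add-listener calls continue in the trace below):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport, in-capsule data off
    $RPC bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420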
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.925 [2024-11-27 06:15:19.303977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:59.925 { 00:12:59.925 "aliases": [ 00:12:59.925 "6c20ea81-b971-4519-baf9-6e7a8a059ab3" 00:12:59.925 ], 00:12:59.925 "assigned_rate_limits": { 00:12:59.925 "r_mbytes_per_sec": 0, 00:12:59.925 "rw_ios_per_sec": 0, 00:12:59.925 "rw_mbytes_per_sec": 0, 00:12:59.925 "w_mbytes_per_sec": 0 00:12:59.925 }, 00:12:59.925 "block_size": 512, 00:12:59.925 "claim_type": "exclusive_write", 00:12:59.925 "claimed": true, 00:12:59.925 "driver_specific": {}, 00:12:59.925 "memory_domains": [ 00:12:59.925 { 00:12:59.925 "dma_device_id": "system", 00:12:59.925 "dma_device_type": 1 00:12:59.925 }, 00:12:59.925 { 00:12:59.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.925 
"dma_device_type": 2 00:12:59.925 } 00:12:59.925 ], 00:12:59.925 "name": "Malloc1", 00:12:59.925 "num_blocks": 1048576, 00:12:59.925 "product_name": "Malloc disk", 00:12:59.925 "supported_io_types": { 00:12:59.925 "abort": true, 00:12:59.925 "compare": false, 00:12:59.925 "compare_and_write": false, 00:12:59.925 "copy": true, 00:12:59.925 "flush": true, 00:12:59.925 "get_zone_info": false, 00:12:59.925 "nvme_admin": false, 00:12:59.925 "nvme_io": false, 00:12:59.925 "nvme_io_md": false, 00:12:59.925 "nvme_iov_md": false, 00:12:59.925 "read": true, 00:12:59.925 "reset": true, 00:12:59.925 "seek_data": false, 00:12:59.925 "seek_hole": false, 00:12:59.925 "unmap": true, 00:12:59.925 "write": true, 00:12:59.925 "write_zeroes": true, 00:12:59.925 "zcopy": true, 00:12:59.925 "zone_append": false, 00:12:59.925 "zone_management": false 00:12:59.925 }, 00:12:59.925 "uuid": "6c20ea81-b971-4519-baf9-6e7a8a059ab3", 00:12:59.925 "zoned": false 00:12:59.925 } 00:12:59.925 ]' 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:59.925 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:00.182 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.182 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:00.182 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.182 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:00.182 06:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:02.152 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:02.410 06:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.346 ************************************ 00:13:03.346 START TEST filesystem_ext4 00:13:03.346 ************************************ 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
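waitforserial, traced just above, gives the kernel up to sixteen two-second tries to expose a block device whose serial matches the subsystem's; the expected size (536870912) was computed earlier by jq from bdev_get_bdevs as block_size x num_blocks = 512 x 1048576. The loop in isolation:

    waitforserial() {
        local serial=$1 want=${2:-1} i=0 n
        while (( i++ <= 15 )); do
            n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( n == want )) && return 0   # expected device count reached
            sleep 2
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME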
00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:03.346 mke2fs 1.47.0 (5-Feb-2023) 00:13:03.346 Discarding device blocks: 0/522240 done 00:13:03.346 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:03.346 Filesystem UUID: 3a11e41e-9c1e-4811-9ca4-ceadb4ad653b 00:13:03.346 Superblock backups stored on blocks: 00:13:03.346 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:03.346 00:13:03.346 Allocating group tables: 0/64 done 00:13:03.346 Writing inode tables: 0/64 done 00:13:03.346 Creating journal (8192 blocks): done 00:13:03.346 Writing superblocks and filesystem accounting information: 0/64 done 00:13:03.346 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:03.346 06:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:09.901 
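Each filesystem under test gets the identical smoke test just traced for ext4: mount, create and sync a file, delete and sync again, unmount, then confirm the target (pid 84461) survived and the partition is still listed. As one helper:

    fs_smoke_test() {
        local dev=$1 mnt=$2 tgt_pid=$3
        mount "$dev" "$mnt"
        touch "$mnt/aaa" && sync
        rm "$mnt/aaa" && sync
        umount "$mnt"
        kill -0 "$tgt_pid"                          # target must still be alive
        lsblk -l -o NAME | grep -q -w "${dev##*/}"  # partition still visible
    }

    fs_smoke_test /dev/nvme0n1p1 /mnt/device 84461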
06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 84461 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:09.901 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:09.901 ************************************ 00:13:09.901 END TEST filesystem_ext4 00:13:09.902 ************************************ 00:13:09.902 00:13:09.902 real 0m5.607s 00:13:09.902 user 0m0.019s 00:13:09.902 sys 0m0.051s 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.902 ************************************ 00:13:09.902 START TEST filesystem_btrfs 00:13:09.902 ************************************ 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:09.902 06:15:28 
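make_filesystem itself is a thin dispatcher over mkfs: ext4 wants -F to force, btrfs and xfs want -f, and the binary name follows from the fstype. The real helper also retries, tracked by the i counter seen in the locals above; retries are omitted in this sketch:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [[ $fstype == ext4 ]]; then
            force=-F            # mke2fs spells "force" in upper case
        else
            force=-f            # btrfs-progs and xfsprogs use -f
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }

    make_filesystem btrfs /dev/nvme0n1p1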
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:09.902 btrfs-progs v6.8.1 00:13:09.902 See https://btrfs.readthedocs.io for more information. 00:13:09.902 00:13:09.902 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:09.902 NOTE: several default settings have changed in version 5.15, please make sure 00:13:09.902 this does not affect your deployments: 00:13:09.902 - DUP for metadata (-m dup) 00:13:09.902 - enabled no-holes (-O no-holes) 00:13:09.902 - enabled free-space-tree (-R free-space-tree) 00:13:09.902 00:13:09.902 Label: (null) 00:13:09.902 UUID: 1f50537e-f781-438b-8e13-05640c0c3311 00:13:09.902 Node size: 16384 00:13:09.902 Sector size: 4096 (CPU page size: 4096) 00:13:09.902 Filesystem size: 510.00MiB 00:13:09.902 Block group profiles: 00:13:09.902 Data: single 8.00MiB 00:13:09.902 Metadata: DUP 32.00MiB 00:13:09.902 System: DUP 8.00MiB 00:13:09.902 SSD detected: yes 00:13:09.902 Zoned device: no 00:13:09.902 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:09.902 Checksum: crc32c 00:13:09.902 Number of devices: 1 00:13:09.902 Devices: 00:13:09.902 ID SIZE PATH 00:13:09.902 1 510.00MiB /dev/nvme0n1p1 00:13:09.902 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 84461 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:09.902 
06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:09.902 ************************************ 00:13:09.902 END TEST filesystem_btrfs 00:13:09.902 ************************************ 00:13:09.902 00:13:09.902 real 0m0.141s 00:13:09.902 user 0m0.004s 00:13:09.902 sys 0m0.056s 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.902 ************************************ 00:13:09.902 START TEST filesystem_xfs 00:13:09.902 ************************************ 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:09.902 06:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:09.902 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:09.902 = sectsz=512 attr=2, projid32bit=1 00:13:09.902 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:09.902 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:09.902 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:09.902 = sunit=0 swidth=0 blks 00:13:09.902 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:09.902 log =internal log bsize=4096 blocks=16384, version=2 00:13:09.902 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:09.902 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:09.902 Discarding blocks...Done. 00:13:09.902 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:09.902 06:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 84461 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:12.424 ************************************ 00:13:12.424 END TEST filesystem_xfs 00:13:12.424 ************************************ 00:13:12.424 00:13:12.424 real 0m2.923s 00:13:12.424 user 0m0.023s 00:13:12.424 sys 0m0.040s 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.424 06:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 84461 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 84461 ']' 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 84461 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84461 00:13:12.424 killing process with pid 84461 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84461' 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 84461 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 84461 00:13:12.424 ************************************ 00:13:12.424 END TEST nvmf_filesystem_no_in_capsule 00:13:12.424 ************************************ 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:12.424 00:13:12.424 real 0m13.550s 00:13:12.424 user 0m52.621s 00:13:12.424 sys 0m1.348s 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:12.424 ************************************ 00:13:12.424 START TEST nvmf_filesystem_in_capsule 00:13:12.424 ************************************ 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=84827 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 84827 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 84827 ']' 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
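A condensed sketch of the wait step running here, assuming the stock rpc.py client; the pid 84827, the socket path, and the retry count of 100 (max_retries above) are taken from this log, while the loop shape is an approximation of waitforlisten, not its verbatim source:

    # Poll until the freshly started nvmf_tgt (pid 84827 in this run) answers
    # on its RPC socket; give up if the process dies while coming up.
    nvmfpid=84827
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1        # target exited prematurely
        if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break                                       # socket up, RPCs answered
        fi
        sleep 0.1
    done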
00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.424 06:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.424 [2024-11-27 06:15:31.867940] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:13:12.424 [2024-11-27 06:15:31.867989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.682 [2024-11-27 06:15:31.981636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:12.682 [2024-11-27 06:15:32.005972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.682 [2024-11-27 06:15:32.023845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.682 [2024-11-27 06:15:32.024001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.682 [2024-11-27 06:15:32.024068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.682 [2024-11-27 06:15:32.024090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.682 [2024-11-27 06:15:32.024102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
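For reference, the flags in the nvmf_tgt invocation above decode as follows; the -m line is what produces the reactor notices that follow:

    # -i 0       shared-memory instance id (hence the spdk0 EAL file prefix)
    # -e 0xFFFF  tracepoint group mask, echoed back by app_setup_trace above
    # -m 0xF     reactor core mask: 0xF = 0b1111 selects cores 0-3, which is why
    #            four "Reactor started on core N" lines appear next
    #            (e.g. -m 0xA = 0b1010 would pin reactors to cores 1 and 3 only)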
00:13:12.682 [2024-11-27 06:15:32.024748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.682 [2024-11-27 06:15:32.024981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.682 [2024-11-27 06:15:32.025047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.682 [2024-11-27 06:15:32.025051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.682 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.682 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:12.682 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.682 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.682 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.683 [2024-11-27 06:15:32.133067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.683 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.942 Malloc1 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.942 [2024-11-27 06:15:32.246685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:12.942 { 00:13:12.942 "aliases": [ 00:13:12.942 "e6641318-1072-4ada-96cd-cca1e54e7415" 00:13:12.942 ], 00:13:12.942 "assigned_rate_limits": { 00:13:12.942 "r_mbytes_per_sec": 0, 00:13:12.942 "rw_ios_per_sec": 0, 00:13:12.942 "rw_mbytes_per_sec": 0, 00:13:12.942 "w_mbytes_per_sec": 0 00:13:12.942 }, 00:13:12.942 "block_size": 512, 00:13:12.942 "claim_type": "exclusive_write", 00:13:12.942 "claimed": true, 00:13:12.942 "driver_specific": {}, 00:13:12.942 "memory_domains": [ 00:13:12.942 { 00:13:12.942 "dma_device_id": "system", 00:13:12.942 "dma_device_type": 1 00:13:12.942 }, 00:13:12.942 { 00:13:12.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.942 "dma_device_type": 2 00:13:12.942 } 00:13:12.942 ], 00:13:12.942 "name": "Malloc1", 00:13:12.942 "num_blocks": 1048576, 00:13:12.942 "product_name": 
"Malloc disk", 00:13:12.942 "supported_io_types": { 00:13:12.942 "abort": true, 00:13:12.942 "compare": false, 00:13:12.942 "compare_and_write": false, 00:13:12.942 "copy": true, 00:13:12.942 "flush": true, 00:13:12.942 "get_zone_info": false, 00:13:12.942 "nvme_admin": false, 00:13:12.942 "nvme_io": false, 00:13:12.942 "nvme_io_md": false, 00:13:12.942 "nvme_iov_md": false, 00:13:12.942 "read": true, 00:13:12.942 "reset": true, 00:13:12.942 "seek_data": false, 00:13:12.942 "seek_hole": false, 00:13:12.942 "unmap": true, 00:13:12.942 "write": true, 00:13:12.942 "write_zeroes": true, 00:13:12.942 "zcopy": true, 00:13:12.942 "zone_append": false, 00:13:12.942 "zone_management": false 00:13:12.942 }, 00:13:12.942 "uuid": "e6641318-1072-4ada-96cd-cca1e54e7415", 00:13:12.942 "zoned": false 00:13:12.942 } 00:13:12.942 ]' 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:12.942 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:13.201 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.201 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:13.201 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.201 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:13.201 06:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:15.138 06:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:15.138 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.072 ************************************ 00:13:16.072 START TEST filesystem_in_capsule_ext4 00:13:16.072 ************************************ 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:16.072 06:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:16.072 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:16.072 mke2fs 1.47.0 (5-Feb-2023) 00:13:16.330 Discarding device blocks: 0/522240 done 00:13:16.330 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:16.330 Filesystem UUID: 3ddd6a6c-4806-4059-ab8c-b1ec57b47b9b 00:13:16.330 Superblock backups stored on blocks: 00:13:16.330 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:16.330 00:13:16.330 Allocating group tables: 0/64 done 00:13:16.330 Writing inode tables: 0/64 done 00:13:16.330 Creating journal (8192 blocks): done 00:13:16.330 Writing superblocks and filesystem accounting information: 0/64 done 00:13:16.330 00:13:16.330 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:16.330 06:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:21.617 06:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:21.617 06:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 84827 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:21.617 ************************************ 00:13:21.617 END TEST filesystem_in_capsule_ext4 00:13:21.617 ************************************ 00:13:21.617 00:13:21.617 real 0m5.531s 00:13:21.617 user 0m0.012s 00:13:21.617 sys 0m0.053s 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.617 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.875 ************************************ 00:13:21.875 START TEST filesystem_in_capsule_btrfs 00:13:21.875 ************************************ 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@933 -- # local force 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:21.875 btrfs-progs v6.8.1 00:13:21.875 See https://btrfs.readthedocs.io for more information. 00:13:21.875 00:13:21.875 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:21.875 NOTE: several default settings have changed in version 5.15, please make sure 00:13:21.875 this does not affect your deployments: 00:13:21.875 - DUP for metadata (-m dup) 00:13:21.875 - enabled no-holes (-O no-holes) 00:13:21.875 - enabled free-space-tree (-R free-space-tree) 00:13:21.875 00:13:21.875 Label: (null) 00:13:21.875 UUID: 89f9c8f6-346d-433a-8142-757ce0355ee3 00:13:21.875 Node size: 16384 00:13:21.875 Sector size: 4096 (CPU page size: 4096) 00:13:21.875 Filesystem size: 510.00MiB 00:13:21.875 Block group profiles: 00:13:21.875 Data: single 8.00MiB 00:13:21.875 Metadata: DUP 32.00MiB 00:13:21.875 System: DUP 8.00MiB 00:13:21.875 SSD detected: yes 00:13:21.875 Zoned device: no 00:13:21.875 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:21.875 Checksum: crc32c 00:13:21.875 Number of devices: 1 00:13:21.875 Devices: 00:13:21.875 ID SIZE PATH 00:13:21.875 1 510.00MiB /dev/nvme0n1p1 00:13:21.875 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:21.875 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 84827 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w 
nvme0n1 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:21.876 ************************************ 00:13:21.876 END TEST filesystem_in_capsule_btrfs 00:13:21.876 ************************************ 00:13:21.876 00:13:21.876 real 0m0.148s 00:13:21.876 user 0m0.007s 00:13:21.876 sys 0m0.055s 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.876 ************************************ 00:13:21.876 START TEST filesystem_in_capsule_xfs 00:13:21.876 ************************************ 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:21.876 06:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:21.876 06:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:21.876 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:21.876 = sectsz=512 attr=2, projid32bit=1 00:13:21.876 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:21.876 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:21.876 data = bsize=4096 blocks=130560, imaxpct=25 00:13:21.876 = sunit=0 swidth=0 blks 00:13:21.876 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:21.876 log =internal log bsize=4096 blocks=16384, version=2 00:13:21.876 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:21.876 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:22.810 Discarding blocks...Done. 00:13:22.810 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:22.810 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 84827 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:24.709 ************************************ 00:13:24.709 END TEST filesystem_in_capsule_xfs 00:13:24.709 ************************************ 00:13:24.709 00:13:24.709 real 0m2.533s 00:13:24.709 user 0m0.016s 00:13:24.709 sys 0m0.036s 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:24.709 06:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 84827 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 84827 ']' 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 84827 00:13:24.709 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:24.710 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.710 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84827 00:13:24.710 killing process with pid 84827 00:13:24.710 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.710 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.710 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84827' 00:13:24.710 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 84827 00:13:24.710 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 84827 00:13:24.710 ************************************ 00:13:24.710 END TEST nvmf_filesystem_in_capsule 00:13:24.710 ************************************ 00:13:24.710 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:24.710 00:13:24.710 real 0m12.349s 00:13:24.710 user 0m47.687s 00:13:24.710 sys 0m1.339s 00:13:24.710 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.710 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.710 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:24.710 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.710 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.969 rmmod nvme_tcp 00:13:24.969 rmmod nvme_fabrics 00:13:24.969 rmmod nvme_keyring 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set 
nvmf_init_br nomaster 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.969 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:13:25.230 00:13:25.230 real 0m26.925s 00:13:25.230 user 1m40.634s 00:13:25.230 sys 0m3.086s 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.230 ************************************ 00:13:25.230 END TEST nvmf_filesystem 00:13:25.230 ************************************ 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.230 ************************************ 00:13:25.230 START TEST nvmf_target_discovery 00:13:25.230 ************************************ 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:25.230 * Looking for test storage... 
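The nvmftestfini teardown a few lines up deletes the per-test virtual network piece by piece; a rough inverse sketch of the setup, inferred purely from the interface names being removed (the real construction lives in nvmf/common.sh, and the addressing here is an assumption apart from the 10.0.0.3 listener seen earlier):

    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the netns
    ip link add nvmf_br type bridge                              # bridge joining the two sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip -n nvmf_tgt_ns_spdk addr add 10.0.0.3/24 dev nvmf_tgt_if  # address the tests connect to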
00:13:25.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.230 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.231 --rc genhtml_branch_coverage=1 00:13:25.231 --rc genhtml_function_coverage=1 00:13:25.231 --rc genhtml_legend=1 00:13:25.231 --rc geninfo_all_blocks=1 00:13:25.231 --rc geninfo_unexecuted_blocks=1 00:13:25.231 00:13:25.231 ' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.231 --rc genhtml_branch_coverage=1 00:13:25.231 --rc genhtml_function_coverage=1 00:13:25.231 --rc genhtml_legend=1 00:13:25.231 --rc geninfo_all_blocks=1 00:13:25.231 --rc geninfo_unexecuted_blocks=1 00:13:25.231 00:13:25.231 ' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.231 --rc genhtml_branch_coverage=1 00:13:25.231 --rc genhtml_function_coverage=1 00:13:25.231 --rc genhtml_legend=1 00:13:25.231 --rc geninfo_all_blocks=1 00:13:25.231 --rc geninfo_unexecuted_blocks=1 00:13:25.231 00:13:25.231 ' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.231 --rc genhtml_branch_coverage=1 00:13:25.231 --rc genhtml_function_coverage=1 00:13:25.231 --rc genhtml_legend=1 00:13:25.231 --rc geninfo_all_blocks=1 00:13:25.231 --rc geninfo_unexecuted_blocks=1 00:13:25.231 00:13:25.231 ' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.231 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:25.231 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
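The "[: : integer expression expected" complaint above is a classic empty-operand failure: the traced test '[' '' -eq 1 ']' hands an unset flag to a numeric comparison, so [ exits with status 2 and the branch is simply treated as false. A minimal sketch of the usual guard, with a hypothetical flag name rather than the actual common.sh variable:

# Hypothetical flag; default it before any -eq test so [ never sees an empty string.
some_flag=${SOME_FLAG:-0}
if [ "$some_flag" -eq 1 ]; then
    echo "optional behavior enabled"
fi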
00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:25.232 Cannot find device "nvmf_init_br" 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:25.232 Cannot find device "nvmf_init_br2" 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:25.232 Cannot find device "nvmf_tgt_br" 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:13:25.232 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.232 Cannot find device "nvmf_tgt_br2" 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:25.491 Cannot find device "nvmf_init_br" 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:25.491 Cannot find device "nvmf_init_br2" 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:25.491 Cannot find device "nvmf_tgt_br" 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:25.491 Cannot find device "nvmf_tgt_br2" 00:13:25.491 06:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:25.491 Cannot find device "nvmf_br" 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:25.491 Cannot find device "nvmf_init_if" 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:25.491 Cannot find device "nvmf_init_if2" 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:25.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:25.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:25.491 06:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.491 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:25.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
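Condensed, the bring-up traced above: one network namespace for the target, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.0/24 addressing split between the host (initiator, .1/.2) and the namespace (target, .3/.4), plus iptables ACCEPT rules for TCP port 4420. A minimal sketch covering one initiator-side and one target-side pair; the *_if2/*_br2 interfaces repeat the same pattern, and the pings below verify host-to-namespace reachability through the bridge:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT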
00:13:25.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:25.492 00:13:25.492 --- 10.0.0.3 ping statistics --- 00:13:25.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.492 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:25.492 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:25.492 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.027 ms 00:13:25.492 00:13:25.492 --- 10.0.0.4 ping statistics --- 00:13:25.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.492 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:13:25.492 00:13:25.492 --- 10.0.0.1 ping statistics --- 00:13:25.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.492 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:25.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:13:25.492 00:13:25.492 --- 10.0.0.2 ping statistics --- 00:13:25.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.492 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:25.492 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=85398 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 85398 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 85398 ']' 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.492 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.750 [2024-11-27 06:15:45.046835] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:13:25.750 [2024-11-27 06:15:45.047059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.750 [2024-11-27 06:15:45.165157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:25.750 [2024-11-27 06:15:45.191548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.750 [2024-11-27 06:15:45.210861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.750 [2024-11-27 06:15:45.211041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.750 [2024-11-27 06:15:45.211099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.750 [2024-11-27 06:15:45.211144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.750 [2024-11-27 06:15:45.211162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
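The target app is launched inside the namespace and the harness then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that launch-and-poll pattern using scripts/rpc.py (the retry count and sleep interval are assumptions, not taken from this log; the real waitforlisten lives in test/common/autotest_common.sh):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    # spdk_get_version only succeeds once the app listens on /var/tmp/spdk.sock
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done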
00:13:25.750 [2024-11-27 06:15:45.211824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.750 [2024-11-27 06:15:45.212047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.750 [2024-11-27 06:15:45.212222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.750 [2024-11-27 06:15:45.212225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.723 [2024-11-27 06:15:45.947832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.723 Null1 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.723 06:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.723 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 [2024-11-27 06:15:45.991952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:26.724 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:26.724 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:26.724 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 Null2 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:26.724 Null3 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 Null4 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 4420 00:13:26.724 00:13:26.724 Discovery Log Number of Records 6, Generation counter 6 00:13:26.724 =====Discovery Log Entry 0====== 00:13:26.724 trtype: tcp 00:13:26.724 adrfam: ipv4 00:13:26.724 subtype: current discovery subsystem 00:13:26.724 treq: not required 00:13:26.724 portid: 0 00:13:26.724 trsvcid: 4420 00:13:26.724 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:26.724 traddr: 10.0.0.3 00:13:26.724 eflags: explicit discovery connections, duplicate discovery information 00:13:26.724 sectype: none 00:13:26.724 =====Discovery Log Entry 1====== 00:13:26.724 trtype: tcp 00:13:26.724 adrfam: ipv4 00:13:26.724 subtype: nvme subsystem 00:13:26.724 treq: not required 00:13:26.724 portid: 0 00:13:26.724 trsvcid: 4420 00:13:26.724 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:26.724 traddr: 10.0.0.3 00:13:26.724 eflags: none 00:13:26.724 sectype: none 00:13:26.724 =====Discovery Log Entry 2====== 00:13:26.724 trtype: tcp 00:13:26.724 adrfam: ipv4 00:13:26.724 subtype: nvme subsystem 00:13:26.724 treq: not required 00:13:26.724 portid: 0 00:13:26.724 trsvcid: 4420 00:13:26.724 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:26.724 traddr: 10.0.0.3 00:13:26.724 eflags: none 00:13:26.724 sectype: none 00:13:26.724 =====Discovery Log Entry 3====== 00:13:26.724 trtype: tcp 00:13:26.724 adrfam: ipv4 00:13:26.724 subtype: nvme subsystem 00:13:26.724 treq: not required 00:13:26.724 portid: 0 00:13:26.724 trsvcid: 4420 00:13:26.724 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:26.724 traddr: 10.0.0.3 00:13:26.724 eflags: none 00:13:26.724 sectype: none 00:13:26.724 =====Discovery Log Entry 4====== 00:13:26.724 trtype: tcp 00:13:26.724 adrfam: ipv4 00:13:26.724 subtype: nvme subsystem 
00:13:26.724 treq: not required 00:13:26.724 portid: 0 00:13:26.724 trsvcid: 4420 00:13:26.724 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:26.724 traddr: 10.0.0.3 00:13:26.724 eflags: none 00:13:26.724 sectype: none 00:13:26.724 =====Discovery Log Entry 5====== 00:13:26.724 trtype: tcp 00:13:26.724 adrfam: ipv4 00:13:26.724 subtype: discovery subsystem referral 00:13:26.724 treq: not required 00:13:26.724 portid: 0 00:13:26.724 trsvcid: 4430 00:13:26.724 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:26.724 traddr: 10.0.0.3 00:13:26.724 eflags: none 00:13:26.724 sectype: none 00:13:26.724 Perform nvmf subsystem discovery via RPC 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.724 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.724 [ 00:13:26.724 { 00:13:26.724 "allow_any_host": true, 00:13:26.724 "hosts": [], 00:13:26.724 "listen_addresses": [ 00:13:26.724 { 00:13:26.724 "adrfam": "IPv4", 00:13:26.724 "traddr": "10.0.0.3", 00:13:26.724 "trsvcid": "4420", 00:13:26.724 "trtype": "TCP" 00:13:26.724 } 00:13:26.724 ], 00:13:26.724 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:26.724 "subtype": "Discovery" 00:13:26.724 }, 00:13:26.724 { 00:13:26.724 "allow_any_host": true, 00:13:26.724 "hosts": [], 00:13:26.724 "listen_addresses": [ 00:13:26.724 { 00:13:26.724 "adrfam": "IPv4", 00:13:26.725 "traddr": "10.0.0.3", 00:13:26.725 "trsvcid": "4420", 00:13:26.725 "trtype": "TCP" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "max_cntlid": 65519, 00:13:26.725 "max_namespaces": 32, 00:13:26.725 "min_cntlid": 1, 00:13:26.725 "model_number": "SPDK bdev Controller", 00:13:26.725 "namespaces": [ 00:13:26.725 { 00:13:26.725 "bdev_name": "Null1", 00:13:26.725 "name": "Null1", 00:13:26.725 "nguid": "7B3D87D5AD07453D99175ECD9A10BDEE", 00:13:26.725 "nsid": 1, 00:13:26.725 "uuid": "7b3d87d5-ad07-453d-9917-5ecd9a10bdee" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.725 "serial_number": "SPDK00000000000001", 00:13:26.725 "subtype": "NVMe" 00:13:26.725 }, 00:13:26.725 { 00:13:26.725 "allow_any_host": true, 00:13:26.725 "hosts": [], 00:13:26.725 "listen_addresses": [ 00:13:26.725 { 00:13:26.725 "adrfam": "IPv4", 00:13:26.725 "traddr": "10.0.0.3", 00:13:26.725 "trsvcid": "4420", 00:13:26.725 "trtype": "TCP" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "max_cntlid": 65519, 00:13:26.725 "max_namespaces": 32, 00:13:26.725 "min_cntlid": 1, 00:13:26.725 "model_number": "SPDK bdev Controller", 00:13:26.725 "namespaces": [ 00:13:26.725 { 00:13:26.725 "bdev_name": "Null2", 00:13:26.725 "name": "Null2", 00:13:26.725 "nguid": "7D221DE12DEA4B2FBED5515E2AC29F00", 00:13:26.725 "nsid": 1, 00:13:26.725 "uuid": "7d221de1-2dea-4b2f-bed5-515e2ac29f00" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:26.725 "serial_number": "SPDK00000000000002", 00:13:26.725 "subtype": "NVMe" 00:13:26.725 }, 00:13:26.725 { 00:13:26.725 "allow_any_host": true, 00:13:26.725 "hosts": [], 00:13:26.725 "listen_addresses": [ 00:13:26.725 { 00:13:26.725 "adrfam": "IPv4", 00:13:26.725 "traddr": "10.0.0.3", 00:13:26.725 "trsvcid": "4420", 00:13:26.725 
"trtype": "TCP" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "max_cntlid": 65519, 00:13:26.725 "max_namespaces": 32, 00:13:26.725 "min_cntlid": 1, 00:13:26.725 "model_number": "SPDK bdev Controller", 00:13:26.725 "namespaces": [ 00:13:26.725 { 00:13:26.725 "bdev_name": "Null3", 00:13:26.725 "name": "Null3", 00:13:26.725 "nguid": "AA2F29E8ED984882AF231F80B906C7DC", 00:13:26.725 "nsid": 1, 00:13:26.725 "uuid": "aa2f29e8-ed98-4882-af23-1f80b906c7dc" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:26.725 "serial_number": "SPDK00000000000003", 00:13:26.725 "subtype": "NVMe" 00:13:26.725 }, 00:13:26.725 { 00:13:26.725 "allow_any_host": true, 00:13:26.725 "hosts": [], 00:13:26.725 "listen_addresses": [ 00:13:26.725 { 00:13:26.725 "adrfam": "IPv4", 00:13:26.725 "traddr": "10.0.0.3", 00:13:26.725 "trsvcid": "4420", 00:13:26.725 "trtype": "TCP" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "max_cntlid": 65519, 00:13:26.725 "max_namespaces": 32, 00:13:26.725 "min_cntlid": 1, 00:13:26.725 "model_number": "SPDK bdev Controller", 00:13:26.725 "namespaces": [ 00:13:26.725 { 00:13:26.725 "bdev_name": "Null4", 00:13:26.725 "name": "Null4", 00:13:26.725 "nguid": "F3FADB97304B42FCB4CAB04BCBD300A7", 00:13:26.725 "nsid": 1, 00:13:26.725 "uuid": "f3fadb97-304b-42fc-b4ca-b04bcbd300a7" 00:13:26.725 } 00:13:26.725 ], 00:13:26.725 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:26.725 "serial_number": "SPDK00000000000004", 00:13:26.725 "subtype": "NVMe" 00:13:26.725 } 00:13:26.725 ] 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.725 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.984 06:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.984 rmmod nvme_tcp 00:13:26.984 rmmod nvme_fabrics 00:13:26.984 rmmod nvme_keyring 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 85398 ']' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 85398 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 85398 ']' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 85398 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85398 00:13:26.984 killing process with pid 85398 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85398' 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 85398 00:13:26.984 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 85398 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:13:27.242 00:13:27.242 real 0m2.218s 00:13:27.242 user 0m5.699s 00:13:27.242 sys 0m0.534s 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
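nvmf_veth_fini then dismantles the virtual topology in reverse build order: detach the four bridge-side peers, down them, delete the bridge and the veth pairs, and drop the target namespace. A sketch of the same sequence (the final netns delete is an assumption about what _remove_spdk_ns amounts to, inferred from its name):

    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" nomaster
        ip link set "$peer" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk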
# xtrace_disable 00:13:27.242 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.242 ************************************ 00:13:27.242 END TEST nvmf_target_discovery 00:13:27.242 ************************************ 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.501 ************************************ 00:13:27.501 START TEST nvmf_referrals 00:13:27.501 ************************************ 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:27.501 * Looking for test storage... 00:13:27.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.501 --rc genhtml_branch_coverage=1 00:13:27.501 --rc genhtml_function_coverage=1 00:13:27.501 --rc genhtml_legend=1 00:13:27.501 --rc geninfo_all_blocks=1 00:13:27.501 --rc geninfo_unexecuted_blocks=1 00:13:27.501 00:13:27.501 ' 00:13:27.501 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.501 --rc genhtml_branch_coverage=1 00:13:27.501 --rc genhtml_function_coverage=1 00:13:27.501 --rc genhtml_legend=1 00:13:27.501 --rc geninfo_all_blocks=1 00:13:27.501 --rc geninfo_unexecuted_blocks=1 00:13:27.501 00:13:27.501 ' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.502 --rc genhtml_branch_coverage=1 00:13:27.502 --rc genhtml_function_coverage=1 00:13:27.502 --rc genhtml_legend=1 00:13:27.502 --rc geninfo_all_blocks=1 00:13:27.502 --rc geninfo_unexecuted_blocks=1 00:13:27.502 00:13:27.502 ' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.502 --rc genhtml_branch_coverage=1 00:13:27.502 --rc genhtml_function_coverage=1 00:13:27.502 --rc genhtml_legend=1 00:13:27.502 --rc geninfo_all_blocks=1 00:13:27.502 --rc geninfo_unexecuted_blocks=1 00:13:27.502 00:13:27.502 ' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
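The block above is scripts/common.sh comparing the installed lcov version against 2, component-wise, presumably to pick the matching --rc option names for LCOV_OPTS. The mechanics, simplified into a self-contained sketch (the real helper validates each component through its decimal() helper first):

    IFS=.-: read -ra ver1 <<< "1.15"    # -> (1 15)
    IFS=.-: read -ra ver2 <<< "2"       # -> (2)
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { echo ge; break; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { echo lt; break; }   # 1 < 2: lt succeeds
    done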
00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.502 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:27.502 06:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.502 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:27.503 Cannot find device "nvmf_init_br" 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:27.503 Cannot find device "nvmf_init_br2" 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:27.503 Cannot find device "nvmf_tgt_br" 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.503 Cannot find device "nvmf_tgt_br2" 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:13:27.503 06:15:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:27.503 Cannot find device "nvmf_init_br" 00:13:27.503 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:13:27.503 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:27.503 Cannot find device "nvmf_init_br2" 00:13:27.503 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:13:27.503 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:27.503 Cannot find device "nvmf_tgt_br" 00:13:27.503 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:13:27.503 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:27.761 Cannot find device "nvmf_tgt_br2" 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:27.761 Cannot find device "nvmf_br" 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:27.761 Cannot find device "nvmf_init_if" 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:27.761 Cannot find device "nvmf_init_if2" 00:13:27.761 06:15:47 
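All of these "Cannot find device" complaints are expected: before building its topology, nvmf_veth_init first tries to tear down anything a previous run may have left behind, and each failing probe is immediately followed by a `# true` in the trace, which reads as the usual tolerate-failure idiom:

    # best-effort pre-clean; on a fresh host every one of these fails harmlessly
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true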
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:27.761 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
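The build-up traced above is easier to read flattened: four veth pairs, with the *_if ends carrying the addresses (initiators 10.0.0.1/.2 in the default namespace, targets 10.0.0.3/.4 moved into nvmf_tgt_ns_spdk) and the *_br ends left behind to be enslaved to a bridge:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2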
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:27.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:27.762 00:13:27.762 --- 10.0.0.3 ping statistics --- 00:13:27.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.762 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:27.762 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:27.762 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:13:27.762 00:13:27.762 --- 10.0.0.4 ping statistics --- 00:13:27.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.762 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:13:27.762 00:13:27.762 --- 10.0.0.1 ping statistics --- 00:13:27.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.762 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:27.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:13:27.762 00:13:27.762 --- 10.0.0.2 ping statistics --- 00:13:27.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.762 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:27.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=85670 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 85670 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 85670 ']' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.762 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.019 [2024-11-27 06:15:47.298205] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
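With the bridge up, the iptables ACCEPT rules in place, and all four addresses answering pings in both directions, nvmfappstart launches the target inside the namespace and blocks until its RPC socket responds. A minimal sketch of that start-and-wait, assuming the default /var/tmp/spdk.sock socket named in the log (the harness's waitforlisten is more involved and also enforces a timeout):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the app answers RPCs on its socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done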
00:13:28.019 [2024-11-27 06:15:47.298375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.019 [2024-11-27 06:15:47.412302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:28.019 [2024-11-27 06:15:47.438016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.019 [2024-11-27 06:15:47.457540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.019 [2024-11-27 06:15:47.457693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.019 [2024-11-27 06:15:47.457749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.019 [2024-11-27 06:15:47.458157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.019 [2024-11-27 06:15:47.458204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.019 [2024-11-27 06:15:47.458948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.019 [2024-11-27 06:15:47.459262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.019 [2024-11-27 06:15:47.459430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.019 [2024-11-27 06:15:47.459442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 [2024-11-27 06:15:48.207320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 [2024-11-27 06:15:48.219425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:13:28.952 
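With the reactors up on all four cores, the target gets its referral-test configuration: one TCP transport and a discovery listener on the first target address. Equivalent rpc.py calls, arguments exactly as traced:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery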
06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:28.952 06:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:28.952 06:15:48 
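The pattern repeated throughout this test: every mutation (the three adds just traced, then the three removals) is verified from two angles, the target's own list over RPC and what an initiator actually decodes from the discovery log page, both sorted before comparison. Condensed, with the --hostnqn/--hostid flags from earlier omitted for brevity:

    # target-side view
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # initiator-side view: every record except the discovery subsystem being queried
    nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort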
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:28.952 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:28.953 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:28.953 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:28.953 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == 
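After emptying the list, the test re-adds the same address twice, once with the well-known discovery NQN and once pinned to an I/O subsystem NQN, which is why the RPC view above reports 127.0.0.2 twice:

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1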
\n\v\m\e ]] 00:13:29.211 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:29.212 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.212 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.469 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:29.727 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:29.727 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- 
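get_discovery_entries is the helper behind these subtype-level assertions: it pulls the full discovery log as JSON and filters records by their subtype string, so the subsystem referral and the discovery-service referral can be checked separately. In essence (NVME_HOST carries the hostnqn/hostid flags set up earlier):

    get_discovery_entries() {
        local subtype=$1
        nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
            jq ".records[] | select(.subtype == \"$subtype\")"
    }
    get_discovery_entries 'nvme subsystem' | jq -r .subnqn                 # empty by now, the cnode1 referral was removed
    get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn   # still nqn.2014-08.org.nvmexpress.discovery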
# nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -a 10.0.0.3 -s 8009 -o json 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.985 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:30.243 06:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.243 rmmod nvme_tcp 00:13:30.243 rmmod nvme_fabrics 00:13:30.243 rmmod nvme_keyring 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 85670 ']' 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 85670 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 85670 ']' 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 85670 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85670 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85670' 00:13:30.243 killing process with pid 85670 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 85670 00:13:30.243 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 85670 00:13:30.501 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:30.502 06:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:30.502 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:30.502 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:30.760 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:30.760 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:30.760 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.760 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:13:30.761 ************************************ 00:13:30.761 END TEST nvmf_referrals 00:13:30.761 ************************************ 00:13:30.761 00:13:30.761 real 0m3.298s 00:13:30.761 user 0m10.364s 00:13:30.761 sys 0m0.731s 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.761 ************************************ 00:13:30.761 START TEST nvmf_connect_disconnect 00:13:30.761 ************************************ 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:30.761 * Looking for test storage... 
00:13:30.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:30.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.761 --rc genhtml_branch_coverage=1 00:13:30.761 --rc genhtml_function_coverage=1 00:13:30.761 --rc genhtml_legend=1 00:13:30.761 --rc geninfo_all_blocks=1 00:13:30.761 --rc geninfo_unexecuted_blocks=1 00:13:30.761 00:13:30.761 ' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:30.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.761 --rc genhtml_branch_coverage=1 00:13:30.761 --rc genhtml_function_coverage=1 00:13:30.761 --rc genhtml_legend=1 00:13:30.761 --rc geninfo_all_blocks=1 00:13:30.761 --rc geninfo_unexecuted_blocks=1 00:13:30.761 00:13:30.761 ' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:30.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.761 --rc genhtml_branch_coverage=1 00:13:30.761 --rc genhtml_function_coverage=1 00:13:30.761 --rc genhtml_legend=1 00:13:30.761 --rc geninfo_all_blocks=1 00:13:30.761 --rc geninfo_unexecuted_blocks=1 00:13:30.761 00:13:30.761 ' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:30.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.761 --rc genhtml_branch_coverage=1 00:13:30.761 --rc genhtml_function_coverage=1 00:13:30.761 --rc genhtml_legend=1 00:13:30.761 --rc geninfo_all_blocks=1 00:13:30.761 --rc geninfo_unexecuted_blocks=1 00:13:30.761 00:13:30.761 ' 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
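The `lt 1.15 2` trace above is scripts/common.sh deciding whether the installed lcov predates version 2, comparing component-wise (split on '.', '-', ':') rather than lexically, so that 1.15 < 2 while 10 > 9. A condensed sketch of the same idea, not the literal SPDK helper, assuming purely numeric version components:

    version_lt() {                          # true (0) if $1 < $2
        local IFS='.-:' i a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # 10# forces base 10
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                            # equal is not less-than
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov < 2: keep the legacy branch/function coverage flags"
    fi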
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:13:30.761 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.762 06:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.762 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.762 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:31.020 Cannot find device "nvmf_init_br" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:31.020 Cannot find device "nvmf_init_br2" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:31.020 Cannot find device "nvmf_tgt_br" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.020 Cannot find device "nvmf_tgt_br2" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:31.020 Cannot find device "nvmf_init_br" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:31.020 Cannot find device "nvmf_init_br2" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:31.020 Cannot find device "nvmf_tgt_br" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:31.020 Cannot find device "nvmf_tgt_br2" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
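The 'Cannot find device …' messages above, each followed by `true`, are expected rather than failures: nvmf_veth_init starts by tearing down whatever a previous run may have left behind, and every teardown command tolerates the device being absent. The earlier '[: : integer expression expected' from common.sh line 33 looks like the same pattern tripping on an unset flag fed to a numeric test. Two defensive idioms in that spirit; SOME_TEST_FLAG is a placeholder, not the actual variable name:

    # Idempotent pre-clean: "Cannot find device" on a fresh host is fine.
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true

    # Defaulting the flag keeps [ from ever seeing an empty numeric operand.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # hypothetical flag name
        echo "flag-gated setup would run here"
    fi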
00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:31.020 Cannot find device "nvmf_br" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:31.020 Cannot find device "nvmf_init_if" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:31.020 Cannot find device "nvmf_init_if2" 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:31.020 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:31.021 06:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:31.021 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:31.279 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:31.279 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:31.279 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:31.279 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:31.279 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:31.279 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:31.279 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:31.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:31.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:13:31.280 00:13:31.280 --- 10.0.0.3 ping statistics --- 00:13:31.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.280 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:31.280 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:31.280 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:13:31.280 00:13:31.280 --- 10.0.0.4 ping statistics --- 00:13:31.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.280 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:31.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:31.280 00:13:31.280 --- 10.0.0.1 ping statistics --- 00:13:31.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.280 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:31.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:31.280 00:13:31.280 --- 10.0.0.2 ping statistics --- 00:13:31.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.280 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=86022 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 86022 00:13:31.280 06:15:50 
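The pings above confirm the virtual test network that nvmf_veth_init just built: a network namespace holding the target end of each veth pair, a bridge in the root namespace joining the peer ends, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so a later `iptables-save | grep -v SPDK_NVMF | iptables-restore` can strip exactly these rules. A condensed sketch for one initiator/target pair, using the names and addresses from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Comment-tagged rule, removable later by grepping SPDK_NVMF out of iptables-save.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3   # root namespace -> target namespace, as in the trace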
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 86022 ']' 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.280 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:31.280 [2024-11-27 06:15:50.650726] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:13:31.280 [2024-11-27 06:15:50.650787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.280 [2024-11-27 06:15:50.766205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:31.280 [2024-11-27 06:15:50.791470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.538 [2024-11-27 06:15:50.810847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.538 [2024-11-27 06:15:50.810882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.538 [2024-11-27 06:15:50.810888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.538 [2024-11-27 06:15:50.810893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.538 [2024-11-27 06:15:50.810897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
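With the namespace in place, nvmfappstart launches the target inside it, exactly as traced above (`ip netns exec nvmf_tgt_ns_spdk … nvmf_tgt -i 0 -e 0xFFFF -m 0xF`), then blocks until the RPC socket answers. A minimal sketch; the poll loop is a simplified stand-in for SPDK's waitforlisten helper:

    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoints, cores 0-3
    nvmfpid=$!

    # /var/tmp/spdk.sock is a Unix socket, so it is reachable from the root
    # namespace even though the process runs inside nvmf_tgt_ns_spdk.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done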
00:13:31.538 [2024-11-27 06:15:50.811553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.538 [2024-11-27 06:15:50.811768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.538 [2024-11-27 06:15:50.812234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.538 [2024-11-27 06:15:50.812239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.103 [2024-11-27 06:15:51.566870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.103 06:15:51 
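Once the target answers RPCs, connect_disconnect.sh provisions it entirely over RPC, as traced here and capped by the listener add just below: a TCP transport, a 64 MiB / 512 B-block malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.3:4420. The same sequence as plain rpc.py calls, rpc.py again standing in for the rpc_cmd wrapper:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # transport flags as traced
    ./scripts/rpc.py bdev_malloc_create 64 512                      # returns bdev name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                                  # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420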
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.103 [2024-11-27 06:15:51.622954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:32.103 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:34.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.706 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:29.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:13.893 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:13.893 rmmod nvme_tcp 00:17:13.893 rmmod nvme_fabrics 00:17:13.893 rmmod nvme_keyring 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 86022 ']' 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 86022 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 86022 ']' 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 86022 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:17:14.152 
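The long run of 'disconnected 1 controller(s)' lines above is the test body itself: num_iterations=100 rounds of connect then disconnect against cnode1, with `nvme connect -i 8` requesting 8 I/O queues per controller. A sketch of the loop, assuming the kernel nvme-tcp initiator loaded earlier in the trace; the settle step is simplified relative to the real script:

    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -a 10.0.0.3 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30
        sleep 1   # crude settle; the real script waits for the block device to appear
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
    done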
06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86022 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.152 killing process with pid 86022 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86022' 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 86022 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 86022 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:14.152 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:14.411 06:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.411 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:17:14.411 00:17:14.411 real 3m43.668s 00:17:14.411 user 14m35.740s 00:17:14.411 sys 0m17.378s 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.412 ************************************ 00:17:14.412 END TEST nvmf_connect_disconnect 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:14.412 ************************************ 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.412 ************************************ 00:17:14.412 START TEST nvmf_multitarget 00:17:14.412 ************************************ 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:14.412 * Looking for test storage... 
00:17:14.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:17:14.412 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.671 --rc genhtml_branch_coverage=1 00:17:14.671 --rc genhtml_function_coverage=1 00:17:14.671 --rc genhtml_legend=1 00:17:14.671 --rc geninfo_all_blocks=1 00:17:14.671 --rc geninfo_unexecuted_blocks=1 00:17:14.671 00:17:14.671 ' 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.671 --rc genhtml_branch_coverage=1 00:17:14.671 --rc genhtml_function_coverage=1 00:17:14.671 --rc genhtml_legend=1 00:17:14.671 --rc geninfo_all_blocks=1 00:17:14.671 --rc geninfo_unexecuted_blocks=1 00:17:14.671 00:17:14.671 ' 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.671 --rc genhtml_branch_coverage=1 00:17:14.671 --rc genhtml_function_coverage=1 00:17:14.671 --rc genhtml_legend=1 00:17:14.671 --rc geninfo_all_blocks=1 00:17:14.671 --rc geninfo_unexecuted_blocks=1 00:17:14.671 00:17:14.671 ' 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.671 --rc genhtml_branch_coverage=1 00:17:14.671 --rc genhtml_function_coverage=1 00:17:14.671 --rc genhtml_legend=1 00:17:14.671 --rc geninfo_all_blocks=1 00:17:14.671 --rc geninfo_unexecuted_blocks=1 00:17:14.671 00:17:14.671 ' 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
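[annotation] The lt 1.15 2 / cmp_versions trace above splits both version strings on the characters . - :, then compares field by field as integers. A simplified bash sketch of the logic visible in the trace; the real scripts/common.sh helper additionally sanitizes each field through its decimal() check before comparing:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        local op=$2
        IFS=.-: read -ra ver2 <<< "$3"
        # iterate to the longer of the two field lists; missing fields read as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: true only for <=, >=, =
    }

For the traced call, field 0 compares 1 < 2, so lt 1.15 2 returns 0, matching the return 0 in the log.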
nvmf/common.sh@7 -- # uname -s 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.671 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:14.671 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:14.672 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
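[annotation] The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" line above is a real bash error, not trace noise: line 33 expands to '[' '' -eq 1 ']', and test cannot compare an empty string numerically. The run continues because the failing [ simply selects the false branch. A minimal reproduction, with one common defensive rewrite (x is a hypothetical variable standing in for whatever is unset at that site):

    $ x=''
    $ [ "$x" -eq 1 ] && echo yes
    bash: [: : integer expression expected
    $ [ "${x:-0}" -eq 1 ] && echo yes   # default empty to 0: no error, no output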
target/multitarget.sh@15 -- # nvmftestinit 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:14.672 Cannot find device "nvmf_init_br" 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:14.672 Cannot find device "nvmf_init_br2" 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:14.672 Cannot find device "nvmf_tgt_br" 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.672 Cannot find device "nvmf_tgt_br2" 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:17:14.672 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:14.672 Cannot find device "nvmf_init_br" 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:14.673 Cannot find device "nvmf_init_br2" 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:14.673 Cannot find device "nvmf_tgt_br" 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:14.673 Cannot find device "nvmf_tgt_br2" 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:14.673 Cannot find device "nvmf_br" 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:14.673 Cannot find device "nvmf_init_if" 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:14.673 Cannot find device "nvmf_init_if2" 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.673 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
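[annotation] Condensed from the nvmf_veth_init trace above, the fixture is: two initiator veth pairs staying in the root namespace (10.0.0.1/24 and 10.0.0.2/24), two target pairs whose far ends move into nvmf_tgt_ns_spdk (10.0.0.3/24 and 10.0.0.4/24), and one bridge joining the four root-side ends. The same topology as a standalone script (requires root), assembled from the exact commands in the trace:

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per interface; the *_br ends get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-side ends live in the namespace nvmf_tgt will run in
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for b in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$b" master nvmf_br
    done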
nvmf_br 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:14.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:14.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:14.931 00:17:14.931 --- 10.0.0.3 ping statistics --- 00:17:14.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.931 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:14.931 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:14.931 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:14.931 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:17:14.931 00:17:14.931 --- 10.0.0.4 ping statistics --- 00:17:14.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.932 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:17:14.932 00:17:14.932 --- 10.0.0.1 ping statistics --- 00:17:14.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.932 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:14.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
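[annotation] Each ACCEPT rule above enters through the ipts wrapper, which tags the rule with an SPDK_NVMF comment so the teardown's grep -v can find it later. A sketch matching the expanded iptables call shown in the trace (the argument string itself becomes the comment):

    # Tag every rule with its own argument string under an SPDK_NVMF: prefix,
    # making cleanup a single grep -v over iptables-save output.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The four pings that follow confirm bridge reachability in both directions before the target starts.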
00:17:14.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:17:14.932 00:17:14.932 --- 10.0.0.2 ping statistics --- 00:17:14.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.932 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=89849 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 89849 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 89849 ']' 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.932 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 [2024-11-27 06:19:34.352419] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
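[annotation] nvmfappstart launches nvmf_tgt inside the namespace (nvmfpid=89849) and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A hedged sketch of that poll loop, assuming SPDK's scripts/rpc.py is on PATH; the real helper in autotest_common.sh handles timeouts and retries in more detail:

    # Poll until the app's RPC socket answers, bailing out if the process dies.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1           # app died before listening
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }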
00:17:14.932 [2024-11-27 06:19:34.352474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.190 [2024-11-27 06:19:34.467649] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:15.190 [2024-11-27 06:19:34.496618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.190 [2024-11-27 06:19:34.515493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.190 [2024-11-27 06:19:34.515533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.190 [2024-11-27 06:19:34.515539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.190 [2024-11-27 06:19:34.515544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.190 [2024-11-27 06:19:34.515548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.190 [2024-11-27 06:19:34.516232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.190 [2024-11-27 06:19:34.516288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.190 [2024-11-27 06:19:34.516409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.190 [2024-11-27 06:19:34.516586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:15.757 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:16.015 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:16.015 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:16.015 "nvmf_tgt_1" 00:17:16.016 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:16.016 "nvmf_tgt_2" 00:17:16.016 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
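[annotation] Once the four reactors are up, the body of multitarget.sh reduces to assertions over the target count: one default target at start, three after creating nvmf_tgt_1 and nvmf_tgt_2, then one again after the deletes. The same check pattern, with error handling simplified; -s 32 appears to cap subsystems per target:

    rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" != 1 ] && exit 1
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" != 3 ] && exit 1
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" != 1 ] && exit 1

The quoted "nvmf_tgt_1" / "nvmf_tgt_2" and the two true lines in the trace are the RPC responses confirming each create and delete.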
target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:16.016 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:16.274 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:16.274 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:16.274 true 00:17:16.274 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:16.531 true 00:17:16.532 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:16.532 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:16.532 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:16.532 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:16.532 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:16.532 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.532 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.790 rmmod nvme_tcp 00:17:16.790 rmmod nvme_fabrics 00:17:16.790 rmmod nvme_keyring 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 89849 ']' 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 89849 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 89849 ']' 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 89849 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89849 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
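[annotation] killprocess, traced twice in this section (pid 86022 earlier, 89849 here), refuses to signal anything that is not a live SPDK reactor. A simplified sketch of the checks visible in the trace; the real helper also handles the case where the comm name is sudo by resolving the child pid instead:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                   # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        fi
        [ "$process_name" = sudo ] && return 1           # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and propagate exit status
    }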
00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89849' 00:17:16.790 killing process with pid 89849 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 89849 00:17:16.790 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 89849 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:17.048 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.049 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
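[annotation] nvmf_veth_fini undoes the setup in strict reverse: detach the four bridge ports (nomaster), bring them down, delete the bridge, then delete one end of each veth pair, which removes its peer automatically. Condensed from the trace, with the namespace removal at the end assumed to be a plain ip netns delete inside _remove_spdk_ns:

    for b in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$b" nomaster
        ip link set "$b" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if         # peer nvmf_init_br goes with it
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns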
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:17:17.308 00:17:17.308 real 0m2.777s 00:17:17.308 user 0m8.459s 00:17:17.308 sys 0m0.608s 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:17.308 ************************************ 00:17:17.308 END TEST nvmf_multitarget 00:17:17.308 ************************************ 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.308 ************************************ 00:17:17.308 START TEST nvmf_rpc 00:17:17.308 ************************************ 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:17.308 * Looking for test storage... 00:17:17.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.308 --rc genhtml_branch_coverage=1 00:17:17.308 --rc genhtml_function_coverage=1 00:17:17.308 --rc genhtml_legend=1 00:17:17.308 --rc geninfo_all_blocks=1 00:17:17.308 --rc geninfo_unexecuted_blocks=1 00:17:17.308 00:17:17.308 ' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.308 --rc genhtml_branch_coverage=1 00:17:17.308 --rc genhtml_function_coverage=1 00:17:17.308 --rc genhtml_legend=1 00:17:17.308 --rc geninfo_all_blocks=1 00:17:17.308 --rc geninfo_unexecuted_blocks=1 00:17:17.308 00:17:17.308 ' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.308 --rc genhtml_branch_coverage=1 00:17:17.308 --rc genhtml_function_coverage=1 00:17:17.308 --rc genhtml_legend=1 00:17:17.308 --rc geninfo_all_blocks=1 00:17:17.308 --rc geninfo_unexecuted_blocks=1 00:17:17.308 00:17:17.308 ' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.308 --rc genhtml_branch_coverage=1 00:17:17.308 --rc genhtml_function_coverage=1 00:17:17.308 --rc genhtml_legend=1 00:17:17.308 --rc geninfo_all_blocks=1 00:17:17.308 --rc geninfo_unexecuted_blocks=1 00:17:17.308 00:17:17.308 ' 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.308 06:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:17.308 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.309 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.309 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:17.567 Cannot find device "nvmf_init_br" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:17:17.567 06:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:17.567 Cannot find device "nvmf_init_br2" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:17.567 Cannot find device "nvmf_tgt_br" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.567 Cannot find device "nvmf_tgt_br2" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:17.567 Cannot find device "nvmf_init_br" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:17.567 Cannot find device "nvmf_init_br2" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:17.567 Cannot find device "nvmf_tgt_br" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:17.567 Cannot find device "nvmf_tgt_br2" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:17.567 Cannot find device "nvmf_br" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:17.567 Cannot find device "nvmf_init_if" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:17.567 Cannot find device "nvmf_init_if2" 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.567 06:19:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.567 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:17.568 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:17.568 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.568 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:17.568 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:17.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:17:17.826 00:17:17.826 --- 10.0.0.3 ping statistics --- 00:17:17.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.826 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:17.826 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:17.826 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:17.826 00:17:17.826 --- 10.0.0.4 ping statistics --- 00:17:17.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.826 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:17.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:17:17.826 00:17:17.826 --- 10.0.0.1 ping statistics --- 00:17:17.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.826 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:17.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:17.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:17.826 00:17:17.826 --- 10.0.0.2 ping statistics --- 00:17:17.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.826 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=90127 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 90127 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 90127 ']' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:17.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.826 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 [2024-11-27 06:19:37.208625] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:17:17.826 [2024-11-27 06:19:37.208681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.826 [2024-11-27 06:19:37.323801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
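The nvmf_veth_init sequence above builds a small four-veth test fabric: two initiator-side pairs kept in the root namespace, two target-side pairs whose *_if ends are moved into the nvmf_tgt_ns_spdk namespace, and a single bridge (nvmf_br) joining the four *_br peers. The "Cannot find device" / "Cannot open network namespace" messages at the start are expected: the cleanup commands run before anything has been created, and their failures are tolerated. A condensed hand-written equivalent, using the interface names and 10.0.0.0/24 addresses from this run (an illustrative sketch only, not SPDK's actual test/nvmf/common.sh, which unrolls these steps):

# sketch: rebuild the veth/bridge/netns topology seen in the trace above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator side, root ns
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # isolate target ends
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                              # one bridge ties it together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                        # enslave *_br peers
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT          # let the bridge forward
ping -c 1 10.0.0.3                                           # initiator -> target check

The four pings in the trace (root ns to 10.0.0.3/.4, and from inside the namespace back to 10.0.0.1/.2) confirm both directions across the bridge before the target is started.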
00:17:17.826 [2024-11-27 06:19:37.349672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.117 [2024-11-27 06:19:37.368453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.117 [2024-11-27 06:19:37.368491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.117 [2024-11-27 06:19:37.368498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.117 [2024-11-27 06:19:37.368503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.117 [2024-11-27 06:19:37.368508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.117 [2024-11-27 06:19:37.369353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.117 [2024-11-27 06:19:37.369576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.117 [2024-11-27 06:19:37.370249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.117 [2024-11-27 06:19:37.370371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:18.688 "poll_groups": [ 00:17:18.688 { 00:17:18.688 "admin_qpairs": 0, 00:17:18.688 "completed_nvme_io": 0, 00:17:18.688 "current_admin_qpairs": 0, 00:17:18.688 "current_io_qpairs": 0, 00:17:18.688 "io_qpairs": 0, 00:17:18.688 "name": "nvmf_tgt_poll_group_000", 00:17:18.688 "pending_bdev_io": 0, 00:17:18.688 "transports": [] 00:17:18.688 }, 00:17:18.688 { 00:17:18.688 "admin_qpairs": 0, 00:17:18.688 "completed_nvme_io": 0, 00:17:18.688 "current_admin_qpairs": 0, 00:17:18.688 "current_io_qpairs": 0, 00:17:18.688 "io_qpairs": 0, 00:17:18.688 "name": "nvmf_tgt_poll_group_001", 00:17:18.688 "pending_bdev_io": 0, 00:17:18.688 "transports": [] 00:17:18.688 }, 00:17:18.688 { 00:17:18.688 "admin_qpairs": 0, 00:17:18.688 "completed_nvme_io": 0, 00:17:18.688 "current_admin_qpairs": 0, 00:17:18.688 "current_io_qpairs": 0, 00:17:18.688 "io_qpairs": 0, 00:17:18.688 "name": "nvmf_tgt_poll_group_002", 00:17:18.688 "pending_bdev_io": 0, 00:17:18.688 "transports": [] 00:17:18.688 }, 00:17:18.688 { 00:17:18.688 "admin_qpairs": 0, 00:17:18.688 "completed_nvme_io": 0, 00:17:18.688 "current_admin_qpairs": 0, 00:17:18.688 
"current_io_qpairs": 0, 00:17:18.688 "io_qpairs": 0, 00:17:18.688 "name": "nvmf_tgt_poll_group_003", 00:17:18.688 "pending_bdev_io": 0, 00:17:18.688 "transports": [] 00:17:18.688 } 00:17:18.688 ], 00:17:18.688 "tick_rate": 2600000000 00:17:18.688 }' 00:17:18.688 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.689 [2024-11-27 06:19:38.148564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:18.689 "poll_groups": [ 00:17:18.689 { 00:17:18.689 "admin_qpairs": 0, 00:17:18.689 "completed_nvme_io": 0, 00:17:18.689 "current_admin_qpairs": 0, 00:17:18.689 "current_io_qpairs": 0, 00:17:18.689 "io_qpairs": 0, 00:17:18.689 "name": "nvmf_tgt_poll_group_000", 00:17:18.689 "pending_bdev_io": 0, 00:17:18.689 "transports": [ 00:17:18.689 { 00:17:18.689 "trtype": "TCP" 00:17:18.689 } 00:17:18.689 ] 00:17:18.689 }, 00:17:18.689 { 00:17:18.689 "admin_qpairs": 0, 00:17:18.689 "completed_nvme_io": 0, 00:17:18.689 "current_admin_qpairs": 0, 00:17:18.689 "current_io_qpairs": 0, 00:17:18.689 "io_qpairs": 0, 00:17:18.689 "name": "nvmf_tgt_poll_group_001", 00:17:18.689 "pending_bdev_io": 0, 00:17:18.689 "transports": [ 00:17:18.689 { 00:17:18.689 "trtype": "TCP" 00:17:18.689 } 00:17:18.689 ] 00:17:18.689 }, 00:17:18.689 { 00:17:18.689 "admin_qpairs": 0, 00:17:18.689 "completed_nvme_io": 0, 00:17:18.689 "current_admin_qpairs": 0, 00:17:18.689 "current_io_qpairs": 0, 00:17:18.689 "io_qpairs": 0, 00:17:18.689 "name": "nvmf_tgt_poll_group_002", 00:17:18.689 "pending_bdev_io": 0, 00:17:18.689 "transports": [ 00:17:18.689 { 00:17:18.689 "trtype": "TCP" 00:17:18.689 } 00:17:18.689 ] 00:17:18.689 }, 00:17:18.689 { 00:17:18.689 "admin_qpairs": 0, 00:17:18.689 "completed_nvme_io": 0, 00:17:18.689 "current_admin_qpairs": 0, 00:17:18.689 "current_io_qpairs": 0, 00:17:18.689 "io_qpairs": 0, 00:17:18.689 "name": "nvmf_tgt_poll_group_003", 00:17:18.689 "pending_bdev_io": 0, 
00:17:18.689 "transports": [ 00:17:18.689 { 00:17:18.689 "trtype": "TCP" 00:17:18.689 } 00:17:18.689 ] 00:17:18.689 } 00:17:18.689 ], 00:17:18.689 "tick_rate": 2600000000 00:17:18.689 }' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:18.689 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 Malloc1 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.948 06:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 [2024-11-27 06:19:38.293318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -a 10.0.0.3 -s 4420 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -a 10.0.0.3 -s 4420 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -a 10.0.0.3 -s 4420 00:17:18.948 [2024-11-27 06:19:38.315567] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30' 00:17:18.948 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:18.948 could not add new controller: failed to write to nvme-fabrics device 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
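This "does not allow host" failure is the point of the NOT wrapper: with allow_any_host disabled and no hosts registered on the subsystem, the kernel initiator's connect must be rejected, so the test asserts a non-zero exit status. The round trip it exercises, reduced to its RPC and nvme-cli essentials (NQN and host UUID copied from this run; scripts/rpc.py is assumed to be invoked from an SPDK checkout against the target's /var/tmp/spdk.sock — a hedged sketch, not the harness's exact wrappers):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30
SUBNQN=nqn.2016-06.io.spdk:cnode1
# reject: anonymous access off and the host is not on the allow list
scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN" \
    && echo "BUG: connect should have been rejected"
# accept: register the host explicitly, then the same connect succeeds
scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"

The trace below follows exactly this shape: nvmf_subsystem_add_host at target/rpc.sh@61, then a successful connect and waitforserial.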
00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.948 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:19.206 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:19.206 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:19.207 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.207 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:19.207 06:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.105 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:21.106 [2024-11-27 06:19:40.586579] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30' 00:17:21.106 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:21.106 could not add new controller: failed to write to nvme-fabrics device 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.106 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 
--hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:21.391 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.391 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:21.391 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.391 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:21.391 06:19:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.291 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.291 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.291 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.291 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:23.291 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.291 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:23.291 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.549 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.550 [2024-11-27 06:19:42.862877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.550 06:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:23.550 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:23.550 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:23.550 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.550 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:23.550 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.080 06:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.080 [2024-11-27 06:19:45.237296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:26.080 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
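From here the log repeats the same cycle five times (target/rpc.sh sets loops=5 at the top of the test): create the subsystem, expose it over TCP, attach the Malloc1 namespace, connect the kernel initiator, verify the namespace appears, then tear everything back down. Schematically, with rpc_cmd standing for the harness's scripts/rpc.py wrapper and waitforserial / waitforserial_disconnect for its lsblk polls:

for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    waitforserial SPDKISFASTANDAWESOME             # namespace visible to the host
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME  # namespace gone again
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

Each iteration in the trace ends with the "disconnected 1 controller(s)" line from nvme disconnect before the next nvmf_create_subsystem call.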
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 [2024-11-27 06:19:47.532256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.082 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:28.340 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.340 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:28.340 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.340 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:28.340 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:30.239 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:30.239 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:30.239 06:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.239 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:30.239 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.239 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:30.239 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:30.497 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.498 [2024-11-27 06:19:49.935143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 
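The waitforserial / waitforserial_disconnect helpers used after every connect and disconnect are plain polling loops over lsblk, matching the (( i++ <= 15 )) counters and grep -c SPDKISFASTANDAWESOME probes visible in the trace. A minimal sketch of that behaviour (not SPDK's verbatim autotest_common.sh, whose device-count bookkeeping is more elaborate):

# wait until a block device whose SERIAL matches appears
waitforserial() {
    local serial=$1 i=0
    sleep 2    # give the kernel initiator time to attach the namespace
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 1
    done
    return 1
}

# wait until no block device with that SERIAL remains
waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
    done
    return 1
}

The serial string SPDKISFASTANDAWESOME is whatever was passed to nvmf_create_subsystem -s, which is why the same token threads through every lsblk probe in this section.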
00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.498 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:30.754 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:30.754 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:30.754 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.754 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:30.754 06:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:32.656 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:32.656 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:32.656 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.656 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:32.656 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:32.656 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:32.656 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:32.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.913 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.914 [2024-11-27 06:19:52.242000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:32.914 06:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:32.914 06:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 [2024-11-27 06:19:54.532483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 [2024-11-27 06:19:54.580509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 [2024-11-27 06:19:54.628537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 
port 4420 *** 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.445 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 [2024-11-27 06:19:54.676586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 [2024-11-27 06:19:54.724646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:35.446 "poll_groups": [ 00:17:35.446 { 00:17:35.446 "admin_qpairs": 2, 00:17:35.446 "completed_nvme_io": 164, 00:17:35.446 "current_admin_qpairs": 0, 00:17:35.446 "current_io_qpairs": 0, 00:17:35.446 "io_qpairs": 16, 00:17:35.446 "name": "nvmf_tgt_poll_group_000", 00:17:35.446 "pending_bdev_io": 0, 00:17:35.446 "transports": [ 00:17:35.446 { 00:17:35.446 "trtype": "TCP" 00:17:35.446 } 00:17:35.446 ] 00:17:35.446 }, 00:17:35.446 { 00:17:35.446 "admin_qpairs": 3, 00:17:35.446 "completed_nvme_io": 68, 00:17:35.446 "current_admin_qpairs": 0, 00:17:35.446 "current_io_qpairs": 0, 00:17:35.446 "io_qpairs": 17, 00:17:35.446 "name": "nvmf_tgt_poll_group_001", 00:17:35.446 "pending_bdev_io": 0, 00:17:35.446 "transports": [ 00:17:35.446 { 00:17:35.446 "trtype": "TCP" 00:17:35.446 } 00:17:35.446 ] 00:17:35.446 }, 00:17:35.446 { 00:17:35.446 "admin_qpairs": 1, 00:17:35.446 "completed_nvme_io": 71, 00:17:35.446 "current_admin_qpairs": 0, 00:17:35.446 "current_io_qpairs": 0, 00:17:35.446 "io_qpairs": 19, 00:17:35.446 "name": "nvmf_tgt_poll_group_002", 00:17:35.446 "pending_bdev_io": 0, 00:17:35.446 "transports": [ 00:17:35.446 { 00:17:35.446 "trtype": "TCP" 00:17:35.446 } 00:17:35.446 ] 00:17:35.446 }, 00:17:35.446 { 00:17:35.446 "admin_qpairs": 1, 00:17:35.446 "completed_nvme_io": 117, 00:17:35.446 "current_admin_qpairs": 0, 00:17:35.446 "current_io_qpairs": 0, 00:17:35.446 "io_qpairs": 18, 00:17:35.446 "name": "nvmf_tgt_poll_group_003", 00:17:35.446 "pending_bdev_io": 0, 00:17:35.446 "transports": [ 00:17:35.446 { 00:17:35.446 "trtype": "TCP" 00:17:35.446 } 00:17:35.446 ] 00:17:35.446 } 00:17:35.446 ], 00:17:35.446 "tick_rate": 2600000000 00:17:35.446 }' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 
'filter=.poll_groups[].admin_qpairs' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.446 06:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:36.013 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.013 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:36.013 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.013 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.013 rmmod nvme_tcp 00:17:36.013 rmmod nvme_fabrics 00:17:36.013 rmmod nvme_keyring 00:17:36.013 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 90127 ']' 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 90127 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 90127 ']' 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 90127 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90127 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.271 killing process with pid 90127 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90127' 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 90127 00:17:36.271 06:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 90127 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.271 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:36.272 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:36.272 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:36.272 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:36.272 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:17:36.602 00:17:36.602 real 0m19.268s 00:17:36.602 user 1m12.448s 00:17:36.602 sys 0m1.999s 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.602 ************************************ 00:17:36.602 END TEST nvmf_rpc 00:17:36.602 ************************************ 00:17:36.602 06:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.602 06:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.603 06:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.603 ************************************ 00:17:36.603 START TEST nvmf_invalid 00:17:36.603 ************************************ 00:17:36.603 06:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:36.603 * Looking for test storage... 00:17:36.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:36.603 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:36.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.875 --rc genhtml_branch_coverage=1 00:17:36.875 --rc genhtml_function_coverage=1 00:17:36.875 --rc genhtml_legend=1 00:17:36.875 --rc geninfo_all_blocks=1 00:17:36.875 --rc geninfo_unexecuted_blocks=1 00:17:36.875 00:17:36.875 ' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.875 --rc genhtml_branch_coverage=1 00:17:36.875 --rc genhtml_function_coverage=1 00:17:36.875 --rc genhtml_legend=1 00:17:36.875 --rc geninfo_all_blocks=1 00:17:36.875 --rc geninfo_unexecuted_blocks=1 00:17:36.875 00:17:36.875 ' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.875 --rc genhtml_branch_coverage=1 00:17:36.875 --rc genhtml_function_coverage=1 00:17:36.875 --rc genhtml_legend=1 00:17:36.875 --rc geninfo_all_blocks=1 00:17:36.875 --rc geninfo_unexecuted_blocks=1 00:17:36.875 00:17:36.875 ' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.875 --rc genhtml_branch_coverage=1 00:17:36.875 --rc genhtml_function_coverage=1 00:17:36.875 --rc genhtml_legend=1 00:17:36.875 --rc geninfo_all_blocks=1 00:17:36.875 --rc geninfo_unexecuted_blocks=1 00:17:36.875 00:17:36.875 ' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:36.875 06:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.875 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.875 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
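The nvmf_veth_init records that follow build the test network: veth pairs for two initiator interfaces (10.0.0.1, 10.0.0.2) stay on the host, veth pairs for two target interfaces (10.0.0.3, 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, all peer ends join the nvmf_br bridge, iptables ACCEPT rules open port 4420, and pings verify reachability. A condensed sketch covering one initiator/target leg (names and addresses come from the log; ordering is simplified):

    # One initiator + one target leg of the veth/bridge topology built below.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br              # bridge the host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                   # host -> target namespace check

The "Cannot find device" and "Cannot open network namespace" messages in the records below are the expected output of the teardown half (nvmf_veth_fini) running first on a clean host, before the setup commands above are issued.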
00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:36.876 Cannot find device "nvmf_init_br" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:36.876 Cannot find device "nvmf_init_br2" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:36.876 Cannot find device "nvmf_tgt_br" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.876 Cannot find device "nvmf_tgt_br2" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:36.876 Cannot find device "nvmf_init_br" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:36.876 Cannot find device "nvmf_init_br2" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:36.876 Cannot find device "nvmf_tgt_br" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:36.876 Cannot find device "nvmf_tgt_br2" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:36.876 Cannot find device "nvmf_br" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:36.876 Cannot find device "nvmf_init_if" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:36.876 Cannot find device "nvmf_init_if2" 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.876 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.876 06:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.876 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:37.187 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:37.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:37.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:37.187 00:17:37.187 --- 10.0.0.3 ping statistics --- 00:17:37.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.187 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:37.187 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:37.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:37.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:17:37.187 00:17:37.187 --- 10.0.0.4 ping statistics --- 00:17:37.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.187 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:37.187 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:37.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:17:37.187 00:17:37.187 --- 10.0.0.1 ping statistics --- 00:17:37.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.187 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:17:37.187 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:37.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:37.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:37.187 00:17:37.187 --- 10.0.0.2 ping statistics --- 00:17:37.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.187 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:37.187 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=90695 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 90695 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 90695 ']' 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.188 06:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:37.188 [2024-11-27 06:19:56.464361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
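Here nvmfappstart launches the target inside the namespace (nvmf/common.sh@508: ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the JSON-RPC socket at /var/tmp/spdk.sock answers; the recorded PID, 90695, is what the exit trap later kills. A simplified stand-in for that start-and-wait sequence (the real waitforlisten also enforces a bounded max_retries):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the RPC server is up before issuing any test calls.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done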
00:17:37.188 [2024-11-27 06:19:56.464406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.188 [2024-11-27 06:19:56.576835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:37.188 [2024-11-27 06:19:56.600401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.188 [2024-11-27 06:19:56.617816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.188 [2024-11-27 06:19:56.617967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.188 [2024-11-27 06:19:56.618019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.188 [2024-11-27 06:19:56.618040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.188 [2024-11-27 06:19:56.618079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.188 [2024-11-27 06:19:56.618654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.188 [2024-11-27 06:19:56.618886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.188 [2024-11-27 06:19:56.619256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.188 [2024-11-27 06:19:56.619147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15567 00:17:38.120 [2024-11-27 06:19:57.616774] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/27 06:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15567 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:17:38.120 request: 00:17:38.120 { 00:17:38.120 "method": "nvmf_create_subsystem", 00:17:38.120 "params": { 00:17:38.120 "nqn": "nqn.2016-06.io.spdk:cnode15567", 00:17:38.120 "tgt_name": "foobar" 00:17:38.120 } 00:17:38.120 } 00:17:38.120 Got JSON-RPC error response 00:17:38.120 GoRPCClient: error on 
JSON-RPC call' 00:17:38.120 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/27 06:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15567 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:17:38.120 request: 00:17:38.120 { 00:17:38.120 "method": "nvmf_create_subsystem", 00:17:38.120 "params": { 00:17:38.120 "nqn": "nqn.2016-06.io.spdk:cnode15567", 00:17:38.120 "tgt_name": "foobar" 00:17:38.120 } 00:17:38.120 } 00:17:38.120 Got JSON-RPC error response 00:17:38.120 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:38.121 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:38.121 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5528 00:17:38.378 [2024-11-27 06:19:57.780914] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5528: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:38.378 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/27 06:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5528 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:17:38.378 request: 00:17:38.378 { 00:17:38.378 "method": "nvmf_create_subsystem", 00:17:38.378 "params": { 00:17:38.378 "nqn": "nqn.2016-06.io.spdk:cnode5528", 00:17:38.378 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:17:38.378 } 00:17:38.378 } 00:17:38.378 Got JSON-RPC error response 00:17:38.378 GoRPCClient: error on JSON-RPC call' 00:17:38.378 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/27 06:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5528 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:17:38.378 request: 00:17:38.378 { 00:17:38.378 "method": "nvmf_create_subsystem", 00:17:38.378 "params": { 00:17:38.378 "nqn": "nqn.2016-06.io.spdk:cnode5528", 00:17:38.378 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:17:38.378 } 00:17:38.378 } 00:17:38.378 Got JSON-RPC error response 00:17:38.378 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:38.378 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:38.378 06:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30369 00:17:38.638 [2024-11-27 06:19:57.985054] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30369: invalid model number 'SPDK_Controller' 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/27 06:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30369], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:17:38.638 request: 00:17:38.638 { 
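The rejections above (an unknown target name, then a 0x1f control byte smuggled into a serial number and into a model number) all follow the pattern every case in invalid.sh uses: call rpc.py with a deliberately bad parameter, capture the error output into $out, and assert with a [[ ... == *pattern* ]] glob that the daemon refused it for the right reason. A condensed sketch of the first two cases, with $rpc as defined above:

    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15567 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
          nqn.2016-06.io.spdk:cnode5528 2>&1) || true
    [[ $out == *"Invalid SN"* ]]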
00:17:38.638 "method": "nvmf_create_subsystem", 00:17:38.638 "params": { 00:17:38.638 "nqn": "nqn.2016-06.io.spdk:cnode30369", 00:17:38.638 "model_number": "SPDK_Controller\u001f" 00:17:38.638 } 00:17:38.638 } 00:17:38.638 Got JSON-RPC error response 00:17:38.638 GoRPCClient: error on JSON-RPC call' 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/27 06:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30369], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:17:38.638 request: 00:17:38.638 { 00:17:38.638 "method": "nvmf_create_subsystem", 00:17:38.638 "params": { 00:17:38.638 "nqn": "nqn.2016-06.io.spdk:cnode30369", 00:17:38.638 "model_number": "SPDK_Controller\u001f" 00:17:38.638 } 00:17:38.638 } 00:17:38.638 Got JSON-RPC error response 00:17:38.638 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:38.638 06:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:38.638 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:38.639 06:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
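What looks like noise here is gen_random_s (target/invalid.sh@19-31) unrolled by xtrace: for each requested character it picks an ASCII code from the chars array (32 through 127), renders it with printf %x plus echo -e, and appends it to $string, producing one trace entry per step. A compact equivalent, minus the shell-quoting the helper applies to characters like '"' and '\':

    gen_random_s() {
        local length=$1 ll code ch s=
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 96 + 32 ))           # ASCII 32..127, as in chars=()
            printf -v ch "\x$(printf '%x' "$code")"
            s+=$ch
        done
        printf '%s\n' "$s"
    }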
00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"u/U>-6PA8x+e_F?1r-D\' 00:17:38.639 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '"u/U>-6PA8x+e_F?1r-D\' nqn.2016-06.io.spdk:cnode22148 00:17:38.898 [2024-11-27 06:19:58.221211] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22148: invalid serial number '"u/U>-6PA8x+e_F?1r-D\' 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/27 
06:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22148 serial_number:"u/U>-6PA8x+e_F?1r-D\], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN "u/U>-6PA8x+e_F?1r-D\ 00:17:38.898 request: 00:17:38.898 { 00:17:38.898 "method": "nvmf_create_subsystem", 00:17:38.898 "params": { 00:17:38.898 "nqn": "nqn.2016-06.io.spdk:cnode22148", 00:17:38.898 "serial_number": "\"u/U>-6PA8x+e_F?1r-D\\" 00:17:38.898 } 00:17:38.898 } 00:17:38.898 Got JSON-RPC error response 00:17:38.898 GoRPCClient: error on JSON-RPC call' 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/27 06:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22148 serial_number:"u/U>-6PA8x+e_F?1r-D\], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN "u/U>-6PA8x+e_F?1r-D\ 00:17:38.898 request: 00:17:38.898 { 00:17:38.898 "method": "nvmf_create_subsystem", 00:17:38.898 "params": { 00:17:38.898 "nqn": "nqn.2016-06.io.spdk:cnode22148", 00:17:38.898 "serial_number": "\"u/U>-6PA8x+e_F?1r-D\\" 00:17:38.898 } 00:17:38.898 } 00:17:38.898 Got JSON-RPC error response 00:17:38.898 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.898 06:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:38.898 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:38.899 
06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 
00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 
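Every character append boils down to one idiom, visible in the adjacent entries (printf %x 54 just above, echo -e '\x36' just below): convert the decimal code point to hex, then expand the \xNN escape into the literal byte. In isolation:

    c=$(printf '%x' 54)       # -> "36"
    ch=$(echo -e "\x$c")      # -> "6"
    string+=$ch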
00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:38.899 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
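Worth noting in the 41-character model number being assembled here: a few entries back the loop drew code 127 and appended it as $'\177' (DEL), which is outside the printable range NVMe allows in a model number, so the nvmf_create_subsystem -d call below is guaranteed to fail with "Invalid MN". A one-line detector for such strings, assuming C-locale byte semantics:

    # True if the string contains any byte outside printable ASCII 0x20-0x7e.
    has_nonprintable() { LC_ALL=C grep -q '[^ -~]' <<< "$1"; }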
00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'C~"v2+v[og<&v :wyQ8vig6[gVdcy,Eb..KbgE>9' 00:17:38.900 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'C~"v2+v[og<&v :wyQ8vig6[gVdcy,Eb..KbgE>9' nqn.2016-06.io.spdk:cnode22652 00:17:39.158 [2024-11-27 06:19:58.529448] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22652: invalid model number 'C~"v2+v[og<&v :wyQ8vig6[gVdcy,Eb..KbgE>9' 00:17:39.158 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/27 06:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:C~"v2+v[og<&v :wyQ8vig6[gVdcy,Eb..KbgE>9 nqn:nqn.2016-06.io.spdk:cnode22652], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN C~"v2+v[og<&v :wyQ8vig6[gVdcy,Eb..KbgE>9 00:17:39.158 request: 00:17:39.158 { 00:17:39.158 "method": "nvmf_create_subsystem", 00:17:39.158 "params": { 00:17:39.158 "nqn": "nqn.2016-06.io.spdk:cnode22652", 00:17:39.158 "model_number": "C~\"v2+v[og<&v :wyQ8vig6[gVd\u007fcy,Eb..KbgE>9" 00:17:39.158 } 00:17:39.158 } 00:17:39.158 Got JSON-RPC error response 00:17:39.158 GoRPCClient: error on JSON-RPC call' 00:17:39.158 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/27 06:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:C~"v2+v[og<&v :wyQ8vig6[gVdcy,Eb..KbgE>9 nqn:nqn.2016-06.io.spdk:cnode22652], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN C~"v2+v[og<&v :wyQ8vig6[gVdcy,Eb..KbgE>9 00:17:39.158 request: 00:17:39.158 { 00:17:39.158 "method": "nvmf_create_subsystem", 00:17:39.158 "params": { 00:17:39.158 "nqn": "nqn.2016-06.io.spdk:cnode22652", 00:17:39.158 "model_number": "C~\"v2+v[og<&v :wyQ8vig6[gVd\u007fcy,Eb..KbgE>9" 00:17:39.158 } 00:17:39.158 } 00:17:39.158 Got JSON-RPC error response 00:17:39.158 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:39.158 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:39.158 [2024-11-27 06:19:58.685623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.416 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:39.416 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:39.416 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:39.416 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:39.416 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:39.416 06:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:39.674 [2024-11-27 06:19:59.113892] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:39.674 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/27 06:19:59 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for 
nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:17:39.674 request: 00:17:39.674 { 00:17:39.674 "method": "nvmf_subsystem_remove_listener", 00:17:39.674 "params": { 00:17:39.674 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:39.674 "listen_address": { 00:17:39.674 "trtype": "tcp", 00:17:39.674 "traddr": "", 00:17:39.674 "trsvcid": "4421" 00:17:39.674 } 00:17:39.674 } 00:17:39.674 } 00:17:39.674 Got JSON-RPC error response 00:17:39.674 GoRPCClient: error on JSON-RPC call' 00:17:39.674 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/27 06:19:59 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:17:39.674 request: 00:17:39.674 { 00:17:39.674 "method": "nvmf_subsystem_remove_listener", 00:17:39.674 "params": { 00:17:39.674 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:39.674 "listen_address": { 00:17:39.674 "trtype": "tcp", 00:17:39.674 "traddr": "", 00:17:39.674 "trsvcid": "4421" 00:17:39.674 } 00:17:39.674 } 00:17:39.674 } 00:17:39.674 Got JSON-RPC error response 00:17:39.674 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:39.674 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7720 -i 0 00:17:39.938 [2024-11-27 06:19:59.326023] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7720: invalid cntlid range [0-65519] 00:17:39.938 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/27 06:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7720], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:17:39.938 request: 00:17:39.938 { 00:17:39.938 "method": "nvmf_create_subsystem", 00:17:39.938 "params": { 00:17:39.938 "nqn": "nqn.2016-06.io.spdk:cnode7720", 00:17:39.938 "min_cntlid": 0 00:17:39.938 } 00:17:39.938 } 00:17:39.938 Got JSON-RPC error response 00:17:39.938 GoRPCClient: error on JSON-RPC call' 00:17:39.938 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/27 06:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7720], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:17:39.938 request: 00:17:39.938 { 00:17:39.938 "method": "nvmf_create_subsystem", 00:17:39.938 "params": { 00:17:39.938 "nqn": "nqn.2016-06.io.spdk:cnode7720", 00:17:39.938 "min_cntlid": 0 00:17:39.938 } 00:17:39.938 } 00:17:39.938 Got JSON-RPC error response 00:17:39.938 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:39.938 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20874 -i 65520 00:17:40.196 [2024-11-27 06:19:59.530169] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20874: invalid cntlid range [65520-65519] 00:17:40.196 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/27 06:19:59 
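After the listener and transport checks, the cntlid cases bracket the valid controller-ID window: NVMe-oF controller IDs must fall in 1..65519, so a minimum of 0, a minimum of 65520, a maximum of 0, a maximum of 65520, and an inverted range are all rejected as "Invalid cntlid range". The five calls, condensed from the traces above and below, again with $rpc as before:

    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7720  -i 0         # [0-65519]
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20874 -i 65520     # [65520-65519]
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5195  -I 0         # [1-0]
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29495 -I 65520     # [1-65520]
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32231 -i 6 -I 5    # [6-5]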
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20874], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:17:40.196 request: 00:17:40.196 { 00:17:40.196 "method": "nvmf_create_subsystem", 00:17:40.196 "params": { 00:17:40.196 "nqn": "nqn.2016-06.io.spdk:cnode20874", 00:17:40.196 "min_cntlid": 65520 00:17:40.196 } 00:17:40.196 } 00:17:40.196 Got JSON-RPC error response 00:17:40.196 GoRPCClient: error on JSON-RPC call' 00:17:40.196 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/27 06:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20874], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:17:40.196 request: 00:17:40.196 { 00:17:40.196 "method": "nvmf_create_subsystem", 00:17:40.196 "params": { 00:17:40.196 "nqn": "nqn.2016-06.io.spdk:cnode20874", 00:17:40.196 "min_cntlid": 65520 00:17:40.196 } 00:17:40.196 } 00:17:40.196 Got JSON-RPC error response 00:17:40.196 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:40.197 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5195 -I 0 00:17:40.454 [2024-11-27 06:19:59.738320] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5195: invalid cntlid range [1-0] 00:17:40.454 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/27 06:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5195], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:17:40.454 request: 00:17:40.454 { 00:17:40.454 "method": "nvmf_create_subsystem", 00:17:40.454 "params": { 00:17:40.454 "nqn": "nqn.2016-06.io.spdk:cnode5195", 00:17:40.454 "max_cntlid": 0 00:17:40.454 } 00:17:40.454 } 00:17:40.454 Got JSON-RPC error response 00:17:40.454 GoRPCClient: error on JSON-RPC call' 00:17:40.454 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/27 06:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5195], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:17:40.454 request: 00:17:40.454 { 00:17:40.454 "method": "nvmf_create_subsystem", 00:17:40.454 "params": { 00:17:40.454 "nqn": "nqn.2016-06.io.spdk:cnode5195", 00:17:40.454 "max_cntlid": 0 00:17:40.454 } 00:17:40.454 } 00:17:40.454 Got JSON-RPC error response 00:17:40.454 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:40.454 06:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29495 -I 65520 00:17:40.710 [2024-11-27 06:20:00.022521] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29495: invalid cntlid range [1-65520] 00:17:40.710 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/27 06:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29495], 
err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:17:40.710 request: 00:17:40.710 { 00:17:40.710 "method": "nvmf_create_subsystem", 00:17:40.710 "params": { 00:17:40.710 "nqn": "nqn.2016-06.io.spdk:cnode29495", 00:17:40.710 "max_cntlid": 65520 00:17:40.710 } 00:17:40.710 } 00:17:40.710 Got JSON-RPC error response 00:17:40.710 GoRPCClient: error on JSON-RPC call' 00:17:40.710 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/11/27 06:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29495], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:17:40.710 request: 00:17:40.710 { 00:17:40.710 "method": "nvmf_create_subsystem", 00:17:40.710 "params": { 00:17:40.710 "nqn": "nqn.2016-06.io.spdk:cnode29495", 00:17:40.710 "max_cntlid": 65520 00:17:40.710 } 00:17:40.710 } 00:17:40.710 Got JSON-RPC error response 00:17:40.710 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:40.710 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32231 -i 6 -I 5 00:17:40.710 [2024-11-27 06:20:00.234705] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32231: invalid cntlid range [6-5] 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/11/27 06:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode32231], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:17:40.968 request: 00:17:40.968 { 00:17:40.968 "method": "nvmf_create_subsystem", 00:17:40.968 "params": { 00:17:40.968 "nqn": "nqn.2016-06.io.spdk:cnode32231", 00:17:40.968 "min_cntlid": 6, 00:17:40.968 "max_cntlid": 5 00:17:40.968 } 00:17:40.968 } 00:17:40.968 Got JSON-RPC error response 00:17:40.968 GoRPCClient: error on JSON-RPC call' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/27 06:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode32231], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:17:40.968 request: 00:17:40.968 { 00:17:40.968 "method": "nvmf_create_subsystem", 00:17:40.968 "params": { 00:17:40.968 "nqn": "nqn.2016-06.io.spdk:cnode32231", 00:17:40.968 "min_cntlid": 6, 00:17:40.968 "max_cntlid": 5 00:17:40.968 } 00:17:40.968 } 00:17:40.968 Got JSON-RPC error response 00:17:40.968 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:40.968 { 00:17:40.968 "name": "foobar", 00:17:40.968 "method": "nvmf_delete_target", 00:17:40.968 "req_id": 1 00:17:40.968 } 00:17:40.968 Got JSON-RPC error response 00:17:40.968 response: 00:17:40.968 { 00:17:40.968 "code": -32602, 00:17:40.968 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:17:40.968 }' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:40.968 { 00:17:40.968 "name": "foobar", 00:17:40.968 "method": "nvmf_delete_target", 00:17:40.968 "req_id": 1 00:17:40.968 } 00:17:40.968 Got JSON-RPC error response 00:17:40.968 response: 00:17:40.968 { 00:17:40.968 "code": -32602, 00:17:40.968 "message": "The specified target doesn't exist, cannot delete it." 00:17:40.968 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.968 rmmod nvme_tcp 00:17:40.968 rmmod nvme_fabrics 00:17:40.968 rmmod nvme_keyring 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 90695 ']' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 90695 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 90695 ']' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 90695 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90695 00:17:40.968 killing process with pid 90695 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90695' 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 90695 00:17:40.968 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 90695 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:41.226 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.227 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:17:41.486 00:17:41.486 real 0m4.796s 00:17:41.486 user 0m18.457s 00:17:41.486 sys 0m0.937s 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:41.486 ************************************ 00:17:41.486 END TEST nvmf_invalid 00:17:41.486 ************************************ 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 
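The four rejections traced above pin down the controller-ID window that nvmf_create_subsystem enforces: 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF; cntlid values above that are reserved). A minimal sketch, reusing the rpc.py flags visible in the trace (-i for min_cntlid, -I for max_cntlid) against a single hypothetical NQN; the rejected calls create nothing, so the accepted call can come last:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode-example          # hypothetical NQN for illustration

$RPC nvmf_create_subsystem "$NQN" -i 65520     # rejected: Invalid cntlid range [65520-65519]
$RPC nvmf_create_subsystem "$NQN" -I 0         # rejected: Invalid cntlid range [1-0]
$RPC nvmf_create_subsystem "$NQN" -I 65520     # rejected: Invalid cntlid range [1-65520]
$RPC nvmf_create_subsystem "$NQN" -i 6 -I 5    # rejected: Invalid cntlid range [6-5]
$RPC nvmf_create_subsystem "$NQN" -i 1 -I 65519  # accepted: the full legal window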
00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.486 ************************************ 00:17:41.486 START TEST nvmf_connect_stress 00:17:41.486 ************************************ 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:41.486 * Looking for test storage... 00:17:41.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.486 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.487 --rc genhtml_branch_coverage=1 00:17:41.487 --rc genhtml_function_coverage=1 00:17:41.487 --rc genhtml_legend=1 00:17:41.487 --rc geninfo_all_blocks=1 00:17:41.487 --rc geninfo_unexecuted_blocks=1 00:17:41.487 00:17:41.487 ' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.487 --rc genhtml_branch_coverage=1 00:17:41.487 --rc genhtml_function_coverage=1 00:17:41.487 --rc genhtml_legend=1 00:17:41.487 --rc geninfo_all_blocks=1 00:17:41.487 --rc geninfo_unexecuted_blocks=1 00:17:41.487 00:17:41.487 ' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.487 --rc genhtml_branch_coverage=1 00:17:41.487 --rc genhtml_function_coverage=1 00:17:41.487 --rc genhtml_legend=1 00:17:41.487 --rc geninfo_all_blocks=1 00:17:41.487 --rc geninfo_unexecuted_blocks=1 00:17:41.487 00:17:41.487 ' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.487 --rc genhtml_branch_coverage=1 00:17:41.487 --rc genhtml_function_coverage=1 00:17:41.487 --rc genhtml_legend=1 00:17:41.487 --rc geninfo_all_blocks=1 00:17:41.487 --rc geninfo_unexecuted_blocks=1 00:17:41.487 00:17:41.487 ' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
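The cmp_versions trace above is the suite's pure-bash version comparison: each version string is split on '.', '-' and ':' into an array and compared component-wise, so that lt 1.15 2 holds and an lcov 2.x gets the branch/function-coverage flags. A condensed sketch of that idiom, simplified from the helper traced in scripts/common.sh (same structure, fewer cases):

lt() {
    local -a ver1 ver2
    IFS=.-: read -r -a ver1 <<< "$1"   # "1.15" -> (1 15), as in the trace
    IFS=.-: read -r -a ver2 <<< "$2"   # "2"    -> (2)
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater, so not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less-than
    done
    return 1                            # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"            # prints: 1.15 < 2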
00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.487 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:41.487 06:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:41.487 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:41.488 06:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:41.488 Cannot find device "nvmf_init_br" 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:41.488 Cannot find device "nvmf_init_br2" 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:41.488 Cannot find device "nvmf_tgt_br" 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.488 Cannot find device "nvmf_tgt_br2" 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:41.488 Cannot find device "nvmf_init_br" 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:17:41.488 06:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:41.488 Cannot find device "nvmf_init_br2" 00:17:41.488 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:17:41.488 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:41.746 Cannot find device "nvmf_tgt_br" 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:41.746 Cannot find device "nvmf_tgt_br2" 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:41.746 Cannot find device "nvmf_br" 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:41.746 Cannot find device "nvmf_init_if" 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:41.746 Cannot find device "nvmf_init_if2" 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.746 06:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:41.746 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:41.747 06:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:41.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:17:41.747 00:17:41.747 --- 10.0.0.3 ping statistics --- 00:17:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.747 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:41.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:41.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:41.747 00:17:41.747 --- 10.0.0.4 ping statistics --- 00:17:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.747 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:41.747 00:17:41.747 --- 10.0.0.1 ping statistics --- 00:17:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.747 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:41.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:41.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:17:41.747 00:17:41.747 --- 10.0.0.2 ping statistics --- 00:17:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.747 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=91236 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 91236 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 91236 ']' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.747 06:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:41.747 [2024-11-27 06:20:01.273720] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
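Before the target process starts, nvmf_veth_init built the topology that the four pings just verified: veth pairs for the initiator side (10.0.0.1/.2) and the target side (10.0.0.3/.4), the target ends moved into the nvmf_tgt_ns_spdk namespace, everything spliced together by the nvmf_br bridge, and TCP port 4420 opened. Condensed to a single initiator/target pair (the test creates two of each), the traced sequence is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge joins both halves
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # host -> target, as verified above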
00:17:41.747 [2024-11-27 06:20:01.273782] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.006 [2024-11-27 06:20:01.387961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:42.006 [2024-11-27 06:20:01.409346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:42.006 [2024-11-27 06:20:01.432887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.006 [2024-11-27 06:20:01.433047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.006 [2024-11-27 06:20:01.433114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.006 [2024-11-27 06:20:01.433170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.006 [2024-11-27 06:20:01.433194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.006 [2024-11-27 06:20:01.434077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.006 [2024-11-27 06:20:01.434138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.006 [2024-11-27 06:20:01.434140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.938 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.938 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.939 [2024-11-27 06:20:02.174359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.939 06:20:02 
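With nvmf_tgt up in the namespace, connect_stress.sh configures it over rpc_cmd: the transport and subsystem calls traced above plus the listener and null-bdev calls traced just below. Collected in one place, flags exactly as traced (-a allows any host, -s sets the serial number, -m caps namespaces at 10):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as traced
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512-byte blocks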
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.939 [2024-11-27 06:20:02.190619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.939 NULL1 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=91288 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 
1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.939 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.939 06:20:02 
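The alternating @34/@35 trace lines that follow are the stress loop itself: the connect_stress binary (PID 91288, launched above with -t 10 so it connects and disconnects against nqn.2016-06.io.spdk:cnode1 for ten seconds) runs while the shell keeps issuing RPCs, so configuration changes race with connection setup. A sketch of that loop, under the assumption that rpc_cmd replays the rpc.txt batches the seq 1 20 loop queued above:

while kill -0 "$PERF_PID"; do   # perf binary still connecting/disconnecting?
    rpc_cmd < "$rpcs"           # assumption: replay the queued rpc.txt batches meanwhile
done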
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.197 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.197 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:43.197 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.197 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.197 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.455 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.455 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:43.455 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.455 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.455 06:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.713 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.713 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:43.713 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.713 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.713 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.279 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.280 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:44.280 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.280 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.280 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.538 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.538 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:44.538 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.538 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.538 06:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.796 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.796 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:44.796 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.796 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.796 06:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.053 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.053 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:45.053 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.053 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.053 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.312 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.312 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:45.312 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.312 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.312 06:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.877 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.877 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:45.877 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.877 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.877 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.136 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.136 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:46.136 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.136 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.136 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.395 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.395 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:46.395 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.395 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.395 06:20:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.653 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.653 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:46.653 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.653 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.653 06:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.911 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.911 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:46.911 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.911 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.911 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.478 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.478 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:47.478 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.478 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.478 06:20:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.736 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.736 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:47.736 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.736 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.736 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.993 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.993 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:47.994 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.994 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.994 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.252 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.252 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:48.252 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.252 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.252 06:20:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.509 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.509 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:48.509 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.509 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.509 06:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.075 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.075 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:49.075 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.075 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.075 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.334 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.334 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:49.334 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.334 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.334 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.593 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.593 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:49.593 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.593 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.593 06:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.851 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.851 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:49.851 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.851 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.851 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.109 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.109 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:50.109 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.109 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.109 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.675 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:50.675 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.675 06:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.675 06:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.933 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.933 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:50.933 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.933 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.933 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.191 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.191 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:51.191 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.191 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.191 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.449 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.449 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:51.449 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.449 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.449 06:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.042 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.606 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.606 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:52.606 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.607 06:20:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.607 06:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.864 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.864 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:52.864 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.864 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.864 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.122 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 91288 00:17:53.122 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (91288) - No such process 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 91288 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:53.122 rmmod nvme_tcp 00:17:53.122 rmmod nvme_fabrics 00:17:53.122 rmmod nvme_keyring 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 91236 ']' 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 91236 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 91236 ']' 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 91236 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.122 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91236 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.381 killing process with pid 91236 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91236' 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 91236 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 91236 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:53.381 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:53.639 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:53.639 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.639 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.639 06:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:53.639 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.639 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.639 06:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:17:53.639 00:17:53.639 real 0m12.202s 00:17:53.639 user 0m41.158s 00:17:53.639 sys 0m2.574s 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.639 ************************************ 00:17:53.639 END TEST nvmf_connect_stress 00:17:53.639 ************************************ 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.639 ************************************ 00:17:53.639 START TEST nvmf_fused_ordering 00:17:53.639 ************************************ 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:53.639 * Looking for test storage... 
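The long run of "kill -0 91288" / "rpc_cmd" records above is the polling loop in target/connect_stress.sh: while the stress process (pid 91288) is alive, the script keeps exercising the target's RPC server, and once kill -0 reports "No such process" it reaps the pid and tears down. A minimal sketch of that loop, reconstructed from the trace (rpc_cmd, nvmftestfini, and $testdir are helpers from the sourced test harness; the exact loop body is an assumption, not the verbatim script):

    # reconstructed sketch: poll the stress process and keep the target's
    # RPC server busy until the process exits
    while kill -0 "$stress_pid" 2>/dev/null; do
        rpc_cmd                        # issue an RPC against the running target
    done
    wait "$stress_pid" || true         # reap; kill -0 already reported exit
    rm -f "$testdir/rpc.txt"
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                       # unload nvme modules, restore iptables, delete veths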
00:17:53.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.639 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.640 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.640 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.640 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.640 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:53.640 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:53.640 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.640 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.898 --rc genhtml_branch_coverage=1 00:17:53.898 --rc genhtml_function_coverage=1 00:17:53.898 --rc genhtml_legend=1 00:17:53.898 --rc geninfo_all_blocks=1 00:17:53.898 --rc geninfo_unexecuted_blocks=1 00:17:53.898 00:17:53.898 ' 00:17:53.898 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.899 --rc genhtml_branch_coverage=1 00:17:53.899 --rc genhtml_function_coverage=1 00:17:53.899 --rc genhtml_legend=1 00:17:53.899 --rc geninfo_all_blocks=1 00:17:53.899 --rc geninfo_unexecuted_blocks=1 00:17:53.899 00:17:53.899 ' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.899 --rc genhtml_branch_coverage=1 00:17:53.899 --rc genhtml_function_coverage=1 00:17:53.899 --rc genhtml_legend=1 00:17:53.899 --rc geninfo_all_blocks=1 00:17:53.899 --rc geninfo_unexecuted_blocks=1 00:17:53.899 00:17:53.899 ' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.899 --rc genhtml_branch_coverage=1 00:17:53.899 --rc genhtml_function_coverage=1 00:17:53.899 --rc genhtml_legend=1 00:17:53.899 --rc geninfo_all_blocks=1 00:17:53.899 --rc geninfo_unexecuted_blocks=1 00:17:53.899 00:17:53.899 ' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
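The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is a field-wise dotted-version compare deciding that the installed lcov (1.15) predates version 2, which selects the --rc lcov_branch_coverage flags exported next. A compact sketch of the same logic, under the same splitting rules the trace shows:

    # sketch of the cmp_versions / lt logic traced above: split on . - :
    # and compare numerically, field by field, padding missing fields with 0
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 < 2"   # true: 1 < 2 decides in the first field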
00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:53.899 06:20:13 
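The "[: : integer expression expected" line above is not a transport failure: nvmf/common.sh line 33 runs the test builtin with an empty operand ('[' '' -eq 1 ']' in the trace), and -eq requires integer arguments. A one-line repro:

    [ '' -eq 1 ]    # -> bash: [: : integer expression expected (exit status 2)

The run simply proceeds down the false branch of that check, as the following records show.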
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:53.899 06:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:53.899 Cannot find device "nvmf_init_br" 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:53.899 Cannot find device "nvmf_init_br2" 00:17:53.899 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:53.900 Cannot find device "nvmf_tgt_br" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.900 Cannot find device "nvmf_tgt_br2" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:53.900 Cannot find device "nvmf_init_br" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:53.900 Cannot find device "nvmf_init_br2" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:53.900 Cannot find device "nvmf_tgt_br" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:53.900 Cannot find device "nvmf_tgt_br2" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:53.900 Cannot find device "nvmf_br" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:53.900 Cannot find device "nvmf_init_if" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:53.900 Cannot find device "nvmf_init_if2" 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.900 06:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.900 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:54.158 06:20:13 
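At this point nvmf_veth_init has built the test topology: veth pairs for initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.0/24 addresses assigned, links brought up, and the nvmf_br bridge created. Condensed from the trace above (only one of the two pairs per side is shown; the bridge enslaving and iptables ACCEPT rules follow in the next records):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up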
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:54.158 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.158 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:54.158 00:17:54.158 --- 10.0.0.3 ping statistics --- 00:17:54.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.158 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:54.158 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:54.158 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:54.158 00:17:54.158 --- 10.0.0.4 ping statistics --- 00:17:54.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.158 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:54.158 00:17:54.158 --- 10.0.0.1 ping statistics --- 00:17:54.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.158 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:54.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:54.158 00:17:54.158 --- 10.0.0.2 ping statistics --- 00:17:54.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.158 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=91666 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 91666 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 91666 ']' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.158 06:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.158 [2024-11-27 06:20:13.552670] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
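The NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") record above is what places the target inside the namespace: the ip-netns prefix is spliced onto the app's argv, so the launch recorded at the end of this block expands to:

    # fully expanded target launch, copied from the trace above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

The listener it opens on 10.0.0.3:4420 is therefore reachable from the host only across the veth/bridge path just built.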
00:17:54.158 [2024-11-27 06:20:13.552739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.158 [2024-11-27 06:20:13.667470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:54.415 [2024-11-27 06:20:13.695268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.415 [2024-11-27 06:20:13.711137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.415 [2024-11-27 06:20:13.711167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.415 [2024-11-27 06:20:13.711172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.415 [2024-11-27 06:20:13.711176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.415 [2024-11-27 06:20:13.711179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.415 [2024-11-27 06:20:13.711384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.979 [2024-11-27 06:20:14.451182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.979 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:54.979 06:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 [2024-11-27 06:20:14.467240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 NULL1 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.980 06:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:55.237 [2024-11-27 06:20:14.515458] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:17:55.237 [2024-11-27 06:20:14.515490] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91716 ] 00:17:55.237 [2024-11-27 06:20:14.627213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
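Taken together, the rpc_cmd calls above configure the target in five steps: create the TCP transport, create subsystem cnode1 (any host allowed, serial SPDK00000000000001, up to 10 namespaces), add a listener on 10.0.0.3:4420, back it with a 1000 MiB null bdev, and attach that bdev as a namespace. Spelled out as direct rpc.py invocations (the scripts/rpc.py path is an assumption; in the harness rpc_cmd wraps the same calls):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB, 512 B block size
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering tool then connects to that subsystem ("Attached to nqn.2016-06.io.spdk:cnode1", "Namespace ID: 1 size: 1GB" below) and drives the numbered fused-command iterations that follow.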
00:17:55.496 Attached to nqn.2016-06.io.spdk:cnode1 00:17:55.496 Namespace ID: 1 size: 1GB 00:17:55.496 fused_ordering(0) [fused_ordering(1) through fused_ordering(967) elided: the tool prints one fused_ordering(N) progress line per iteration, and every iteration in this range completed between 00:17:55.496 and 00:17:56.568 with no error output]
00:17:56.568 fused_ordering(968) 00:17:56.568 fused_ordering(969) 00:17:56.568 fused_ordering(970) 00:17:56.568 fused_ordering(971) 00:17:56.568 fused_ordering(972) 00:17:56.568 fused_ordering(973) 00:17:56.568 fused_ordering(974) 00:17:56.568 fused_ordering(975) 00:17:56.569 fused_ordering(976) 00:17:56.569 fused_ordering(977) 00:17:56.569 fused_ordering(978) 00:17:56.569 fused_ordering(979) 00:17:56.569 fused_ordering(980) 00:17:56.569 fused_ordering(981) 00:17:56.569 fused_ordering(982) 00:17:56.569 fused_ordering(983) 00:17:56.569 fused_ordering(984) 00:17:56.569 fused_ordering(985) 00:17:56.569 fused_ordering(986) 00:17:56.569 fused_ordering(987) 00:17:56.569 fused_ordering(988) 00:17:56.569 fused_ordering(989) 00:17:56.569 fused_ordering(990) 00:17:56.569 fused_ordering(991) 00:17:56.569 fused_ordering(992) 00:17:56.569 fused_ordering(993) 00:17:56.569 fused_ordering(994) 00:17:56.569 fused_ordering(995) 00:17:56.569 fused_ordering(996) 00:17:56.569 fused_ordering(997) 00:17:56.569 fused_ordering(998) 00:17:56.569 fused_ordering(999) 00:17:56.569 fused_ordering(1000) 00:17:56.569 fused_ordering(1001) 00:17:56.569 fused_ordering(1002) 00:17:56.569 fused_ordering(1003) 00:17:56.569 fused_ordering(1004) 00:17:56.569 fused_ordering(1005) 00:17:56.569 fused_ordering(1006) 00:17:56.569 fused_ordering(1007) 00:17:56.569 fused_ordering(1008) 00:17:56.569 fused_ordering(1009) 00:17:56.569 fused_ordering(1010) 00:17:56.569 fused_ordering(1011) 00:17:56.569 fused_ordering(1012) 00:17:56.569 fused_ordering(1013) 00:17:56.569 fused_ordering(1014) 00:17:56.569 fused_ordering(1015) 00:17:56.569 fused_ordering(1016) 00:17:56.569 fused_ordering(1017) 00:17:56.569 fused_ordering(1018) 00:17:56.569 fused_ordering(1019) 00:17:56.569 fused_ordering(1020) 00:17:56.569 fused_ordering(1021) 00:17:56.569 fused_ordering(1022) 00:17:56.569 fused_ordering(1023) 00:17:56.569 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:56.569 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:56.569 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.569 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.827 rmmod nvme_tcp 00:17:56.827 rmmod nvme_fabrics 00:17:56.827 rmmod nvme_keyring 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 91666 ']' 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 91666 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 91666 
']' 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 91666 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91666 00:17:56.827 killing process with pid 91666 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91666' 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 91666 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 91666 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:56.827 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.086 06:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:17:57.086 00:17:57.086 real 0m3.507s 00:17:57.086 user 0m4.149s 00:17:57.086 sys 0m0.882s 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.086 ************************************ 00:17:57.086 END TEST nvmf_fused_ordering 00:17:57.086 ************************************ 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.086 ************************************ 00:17:57.086 START TEST nvmf_ns_masking 00:17:57.086 ************************************ 00:17:57.086 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:57.346 * Looking for test storage... 
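A detail worth noting in the nvmftestfini/iptr cleanup above: the harness never tracks individual firewall rules. Every rule it installs is tagged with an iptables comment beginning with SPDK_NVMF, so teardown can simply rewrite the ruleset minus anything tagged. A sketch of both halves of that idiom, using the ACCEPT rule that appears later in this log:

# insert: tag the rule so it can be found again
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# sweep: dump the ruleset, drop every tagged rule, and reload the rest
iptables-save | grep -v SPDK_NVMF | iptables-restore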
00:17:57.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.346 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.347 --rc genhtml_branch_coverage=1 00:17:57.347 --rc genhtml_function_coverage=1 00:17:57.347 --rc genhtml_legend=1 00:17:57.347 --rc geninfo_all_blocks=1 00:17:57.347 --rc geninfo_unexecuted_blocks=1 00:17:57.347 00:17:57.347 ' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.347 --rc genhtml_branch_coverage=1 00:17:57.347 --rc genhtml_function_coverage=1 00:17:57.347 --rc genhtml_legend=1 00:17:57.347 --rc geninfo_all_blocks=1 00:17:57.347 --rc geninfo_unexecuted_blocks=1 00:17:57.347 00:17:57.347 ' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.347 --rc genhtml_branch_coverage=1 00:17:57.347 --rc genhtml_function_coverage=1 00:17:57.347 --rc genhtml_legend=1 00:17:57.347 --rc geninfo_all_blocks=1 00:17:57.347 --rc geninfo_unexecuted_blocks=1 00:17:57.347 00:17:57.347 ' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.347 --rc genhtml_branch_coverage=1 00:17:57.347 --rc genhtml_function_coverage=1 00:17:57.347 --rc genhtml_legend=1 00:17:57.347 --rc geninfo_all_blocks=1 00:17:57.347 --rc geninfo_unexecuted_blocks=1 00:17:57.347 00:17:57.347 ' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.347 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6698b8a8-6bf5-41bc-adc8-00630f14232f 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=18c305fe-a853-4054-8098-9ea528bb8ccf 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2603c8fc-ef83-415e-8000-c37b30d0f2e2 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:57.347 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:57.347 06:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:57.348 Cannot find device "nvmf_init_br" 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:57.348 Cannot find device "nvmf_init_br2" 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:57.348 Cannot find device "nvmf_tgt_br" 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.348 Cannot find device "nvmf_tgt_br2" 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:57.348 Cannot find device "nvmf_init_br" 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:57.348 Cannot find device "nvmf_init_br2" 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:57.348 Cannot find device "nvmf_tgt_br" 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:17:57.348 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:57.607 Cannot find device 
"nvmf_tgt_br2" 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:57.607 Cannot find device "nvmf_br" 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:57.607 Cannot find device "nvmf_init_if" 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:57.607 Cannot find device "nvmf_init_if2" 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:57.607 
06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:57.607 06:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.607 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.607 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.607 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:57.607 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:57.607 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.607 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:57.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:57.608 00:17:57.608 --- 10.0.0.3 ping statistics --- 00:17:57.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.608 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:57.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:57.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:17:57.608 00:17:57.608 --- 10.0.0.4 ping statistics --- 00:17:57.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.608 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:17:57.608 00:17:57.608 --- 10.0.0.1 ping statistics --- 00:17:57.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.608 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:57.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:57.608 00:17:57.608 --- 10.0.0.2 ping statistics --- 00:17:57.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.608 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:57.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
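[Note] The trace up to this point is nvmf_veth_init building the test network before the target starts: the initiator-side veth interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) stay in the root namespace, the target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) are moved into the nvmf_tgt_ns_spdk network namespace, all four bridge-side peers are slaved to the nvmf_br bridge, iptables rules admit TCP port 4420, and the four pings prove connectivity in both directions. A condensed sketch of the same setup, reconstructed from the commands in the trace (the second interface pair, the link-up calls, and the error-tolerant cleanup are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace can reach the target-side address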
00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=91954 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 91954 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 91954 ']' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:57.608 06:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:57.867 [2024-11-27 06:20:17.146361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:17:57.867 [2024-11-27 06:20:17.146416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.867 [2024-11-27 06:20:17.261363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:57.867 [2024-11-27 06:20:17.287602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.867 [2024-11-27 06:20:17.305492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.867 [2024-11-27 06:20:17.305527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.867 [2024-11-27 06:20:17.305533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.867 [2024-11-27 06:20:17.305538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.867 [2024-11-27 06:20:17.305543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:57.867 [2024-11-27 06:20:17.305790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.800 [2024-11-27 06:20:18.246332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:58.800 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:59.058 Malloc1 00:17:59.058 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:59.317 Malloc2 00:17:59.317 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:59.576 06:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:59.576 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:59.833 [2024-11-27 06:20:19.264503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.833 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:59.833 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2603c8fc-ef83-415e-8000-c37b30d0f2e2 -a 10.0.0.3 -s 4420 -i 4 00:18:00.091 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:00.091 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:00.091 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:00.091 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:00.091 06:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:01.990 06:20:21 
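[Note] This stretch of the trace provisions the target over rpc.py and then connects the kernel initiator: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDKISFASTANDAWESOME), Malloc1 as namespace 1, a listener on 10.0.0.3:4420, and an nvme connect that pins both the host NQN and the host identifier so the target can apply per-host masking rules to this controller. A minimal sketch of the same sequence, with the rpc.py path shortened:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 2603c8fc-ef83-415e-8000-c37b30d0f2e2 -a 10.0.0.3 -s 4420 -i 4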
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.990 [ 0]:0x1 00:18:01.990 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.991 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.991 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1841cf208f2d42ccbff693fe11b09838 00:18:01.991 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1841cf208f2d42ccbff693fe11b09838 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.991 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:02.249 [ 0]:0x1 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1841cf208f2d42ccbff693fe11b09838 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1841cf208f2d42ccbff693fe11b09838 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
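[Note] The paired list-ns / id-ns calls in the trace are the script's visibility probe: it lists the namespace IDs for logging, then gates on nvme id-ns reporting a non-zero NGUID; a namespace masked from this host comes back with the all-zero NGUID that the [[ ... != 0...0 ]] test rejects. A sketch of the helper as reconstructed from the ns_masking.sh@43-@45 trace lines (the real function may differ in detail):

    ns_is_visible() {    # $1 is the nsid to probe, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }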
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:02.249 [ 1]:0x2 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5723e51a82a24727b161e13389822295 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5723e51a82a24727b161e13389822295 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:02.249 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.507 06:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.507 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:02.765 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:02.765 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2603c8fc-ef83-415e-8000-c37b30d0f2e2 -a 10.0.0.3 -s 4420 -i 4 00:18:03.023 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:03.023 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:03.023 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.023 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:03.023 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:03.023 06:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 
00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:04.988 [ 0]:0x2 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5723e51a82a24727b161e13389822295 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5723e51a82a24727b161e13389822295 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.988 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.310 [ 0]:0x1 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1841cf208f2d42ccbff693fe11b09838 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1841cf208f2d42ccbff693fe11b09838 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:05.310 [ 1]:0x2 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5723e51a82a24727b161e13389822295 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5723e51a82a24727b161e13389822295 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.310 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.568 06:20:24 
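[Note] The surrounding trace is the core masking cycle. Namespace 1 was re-added with --no-auto-visible, so after a fresh connect only namespace 2 is exposed; nvmf_ns_add_host then grants nqn.2016-06.io.spdk:host1 access and the 0x1 probe starts returning a real NGUID; nvmf_ns_remove_host revokes the grant and the same probe goes back to all zeros. The three RPCs involved, as they appear in the trace:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # grant: ns 1 becomes visible
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # revoke: ns 1 masked again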
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.568 [ 0]:0x2 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.568 06:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.568 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5723e51a82a24727b161e13389822295 00:18:05.568 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5723e51a82a24727b161e13389822295 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.568 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:05.568 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.568 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:05.826 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:05.826 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2603c8fc-ef83-415e-8000-c37b30d0f2e2 -a 10.0.0.3 -s 4420 -i 4 00:18:06.090 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:06.090 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:06.090 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.090 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:06.090 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:06.090 06:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:07.989 [ 0]:0x1 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.989 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1841cf208f2d42ccbff693fe11b09838 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1841cf208f2d42ccbff693fe11b09838 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:07.990 [ 1]:0x2 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5723e51a82a24727b161e13389822295 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5723e51a82a24727b161e13389822295 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.990 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.248 [ 0]:0x2 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:08.248 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5723e51a82a24727b161e13389822295 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5723e51a82a24727b161e13389822295 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # local es=0 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:08.507 06:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:08.507 [2024-11-27 06:20:27.996623] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:08.507 2024/11/27 06:20:28 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:18:08.507 request: 00:18:08.507 { 00:18:08.507 "method": "nvmf_ns_remove_host", 00:18:08.507 "params": { 00:18:08.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.507 "nsid": 2, 00:18:08.507 "host": "nqn.2016-06.io.spdk:host1" 00:18:08.507 } 00:18:08.507 } 00:18:08.507 Got JSON-RPC error response 00:18:08.507 GoRPCClient: error on JSON-RPC call 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.507 06:20:28 
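[Note] The JSON-RPC failure above is the expected outcome of a negative test: nvmf_ns_remove_host is aimed at namespace 2, which was added without --no-auto-visible, and the target rejects the call with -32602 Invalid parameters, presumably because an auto-visible namespace carries no per-host visibility list to edit. The NOT helper from autotest_common.sh inverts the exit status, so the step passes exactly when the wrapped command fails:

    NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
    # NOT returns 0 here because the RPC exits non-zero (the es=1 seen in the trace)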
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.507 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.764 [ 0]:0x2 00:18:08.764 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5723e51a82a24727b161e13389822295 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5723e51a82a24727b161e13389822295 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=92320 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 92320 /var/tmp/host.sock 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 92320 ']' 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:08.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.765 06:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.765 [2024-11-27 06:20:28.192338] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:18:08.765 [2024-11-27 06:20:28.192418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92320 ] 00:18:09.022 [2024-11-27 06:20:28.314854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:09.022 [2024-11-27 06:20:28.341571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.022 [2024-11-27 06:20:28.360719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.588 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.588 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:09.588 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.847 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:10.104 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6698b8a8-6bf5-41bc-adc8-00630f14232f 00:18:10.104 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:10.104 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6698B8A86BF541BCADC800630F14232F -i 00:18:10.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 18c305fe-a853-4054-8098-9ea528bb8ccf 00:18:10.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:10.362 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 18C305FEA853405480989EA528BB8CCF -i 00:18:10.620 06:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:10.620 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:10.877 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:10.877 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:11.134 nvme0n1 00:18:11.134 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:11.134 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:11.392 nvme1n2 00:18:11.392 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:11.392 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:11.392 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:11.392 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:11.392 06:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:11.649 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:11.649 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:11.649 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:11.649 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:11.906 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6698b8a8-6bf5-41bc-adc8-00630f14232f == \6\6\9\8\b\8\a\8\-\6\b\f\5\-\4\1\b\c\-\a\d\c\8\-\0\0\6\3\0\f\1\4\2\3\2\f ]] 00:18:11.906 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:11.906 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:11.906 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:12.232 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 18c305fe-a853-4054-8098-9ea528bb8ccf == \1\8\c\3\0\5\f\e\-\a\8\5\3\-\4\0\5\4\-\8\0\9\8\-\9\e\a\5\2\8\b\b\8\c\c\f ]] 00:18:12.232 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.232 06:20:31 
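[Note] For this final phase a second SPDK application (spdk_tgt -r /var/tmp/host.sock -m 2) plays the initiator. Both namespaces were re-created with explicit NGUIDs derived from the test UUIDs (judging by the tr -d - call and the -g values in the trace, uuid2nguid strips the dashes and upper-cases), each host NQN was granted exactly one namespace, and one bdev_nvme_attach_controller per host NQN produced nvme0n1 and nvme1n2, whose bdev UUIDs round-trip back to 6698b8a8-6bf5-41bc-adc8-00630f14232f and 18c305fe-a853-4054-8098-9ea528bb8ccf. A condensed sketch for the first host:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g 6698B8A86BF541BCADC800630F14232F -i
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    # expected: 6698b8a8-6bf5-41bc-adc8-00630f14232f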
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 6698b8a8-6bf5-41bc-adc8-00630f14232f 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6698B8A86BF541BCADC800630F14232F 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6698B8A86BF541BCADC800630F14232F 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:12.519 06:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6698B8A86BF541BCADC800630F14232F 00:18:12.777 [2024-11-27 06:20:32.101800] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:12.777 [2024-11-27 06:20:32.101833] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:12.777 [2024-11-27 06:20:32.101839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.777 2024/11/27 06:20:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:6698B8A86BF541BCADC800630F14232F no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:12.777 request: 00:18:12.777 { 00:18:12.777 "method": "nvmf_subsystem_add_ns", 00:18:12.777 "params": { 00:18:12.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.777 "namespace": { 00:18:12.777 "bdev_name": "invalid", 00:18:12.777 "nsid": 1, 00:18:12.777 "nguid": "6698B8A86BF541BCADC800630F14232F", 
00:18:12.777 "no_auto_visible": false, 00:18:12.777 "hide_metadata": false 00:18:12.777 } 00:18:12.777 } 00:18:12.777 } 00:18:12.777 Got JSON-RPC error response 00:18:12.777 GoRPCClient: error on JSON-RPC call 00:18:12.777 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:12.777 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.777 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.777 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.777 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 6698b8a8-6bf5-41bc-adc8-00630f14232f 00:18:12.777 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:12.777 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6698B8A86BF541BCADC800630F14232F -i 00:18:13.034 06:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:14.933 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:14.933 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:14.933 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 92320 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 92320 ']' 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 92320 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92320 00:18:15.192 killing process with pid 92320 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92320' 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 92320 00:18:15.192 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 92320 00:18:15.450 06:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.708 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:15.708 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@150 -- # nvmftestfini 00:18:15.708 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:15.708 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:15.967 rmmod nvme_tcp 00:18:15.967 rmmod nvme_fabrics 00:18:15.967 rmmod nvme_keyring 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 91954 ']' 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 91954 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 91954 ']' 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 91954 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91954 00:18:15.967 killing process with pid 91954 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91954' 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 91954 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 91954 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:15.967 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:18:16.225 00:18:16.225 real 0m19.067s 00:18:16.225 user 0m31.155s 00:18:16.225 sys 0m2.292s 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.225 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.225 ************************************ 00:18:16.226 END TEST nvmf_ns_masking 00:18:16.226 ************************************ 00:18:16.226 06:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:18:16.226 06:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:16.226 06:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:16.226 06:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:16.226 06:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.226 06:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:16.226 ************************************ 00:18:16.226 START TEST nvmf_auth_target 00:18:16.226 
************************************ 00:18:16.226 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:16.485 * Looking for test storage... 00:18:16.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.485 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:16.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.486 --rc genhtml_branch_coverage=1 00:18:16.486 --rc genhtml_function_coverage=1 00:18:16.486 --rc genhtml_legend=1 00:18:16.486 --rc geninfo_all_blocks=1 00:18:16.486 --rc geninfo_unexecuted_blocks=1 00:18:16.486 00:18:16.486 ' 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:16.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.486 --rc genhtml_branch_coverage=1 00:18:16.486 --rc genhtml_function_coverage=1 00:18:16.486 --rc genhtml_legend=1 00:18:16.486 --rc geninfo_all_blocks=1 00:18:16.486 --rc geninfo_unexecuted_blocks=1 00:18:16.486 00:18:16.486 ' 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:16.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.486 --rc genhtml_branch_coverage=1 00:18:16.486 --rc genhtml_function_coverage=1 00:18:16.486 --rc genhtml_legend=1 00:18:16.486 --rc geninfo_all_blocks=1 00:18:16.486 --rc geninfo_unexecuted_blocks=1 00:18:16.486 00:18:16.486 ' 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:16.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.486 --rc genhtml_branch_coverage=1 00:18:16.486 --rc genhtml_function_coverage=1 00:18:16.486 --rc genhtml_legend=1 00:18:16.486 --rc geninfo_all_blocks=1 00:18:16.486 --rc geninfo_unexecuted_blocks=1 00:18:16.486 00:18:16.486 ' 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.486 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.487 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.487 
06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:16.487 Cannot find device "nvmf_init_br" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:16.487 Cannot find device "nvmf_init_br2" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:16.487 Cannot find device "nvmf_tgt_br" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.487 Cannot find device "nvmf_tgt_br2" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:16.487 Cannot find device "nvmf_init_br" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:16.487 Cannot find device "nvmf_init_br2" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:16.487 Cannot find device "nvmf_tgt_br" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:16.487 Cannot find device "nvmf_tgt_br2" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:16.487 Cannot find device "nvmf_br" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:16.487 Cannot find device "nvmf_init_if" 00:18:16.487 06:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:16.487 Cannot find device "nvmf_init_if2" 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.487 06:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.487 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:16.488 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.488 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.746 06:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:16.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:16.746 00:18:16.746 --- 10.0.0.3 ping statistics --- 00:18:16.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.746 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:16.746 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:16.746 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:18:16.746 00:18:16.746 --- 10.0.0.4 ping statistics --- 00:18:16.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.746 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:16.746 00:18:16.746 --- 10.0.0.1 ping statistics --- 00:18:16.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.746 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:16.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:18:16.746 00:18:16.746 --- 10.0.0.2 ping statistics --- 00:18:16.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.746 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=92815 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 92815 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 92815 ']' 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
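The nvmf_veth_init records above build the test network in one pass: two initiator veth pairs on the host, two target veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side peers, iptables ACCEPT rules for TCP/4420, and a ping sweep over 10.0.0.1 through 10.0.0.4 to prove reachability. A condensed sketch of that topology, using the same interface names and addresses as the log (link-up steps and error handling omitted):

    # initiator veth pairs stay on the host; their *_br peers join the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # target veth pairs: the *_if ends move into the SPDK target namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiators are 10.0.0.1/.2, targets 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # one bridge ties the four host-side peers together
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # admit NVMe/TCP traffic, then verify the path end to end
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3

This is the mirror image of the nvmf_veth_fini teardown that closed the ns_masking test earlier in the section, which is why the teardown's "Cannot find device" notices reappear here as the interfaces are recreated.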
00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.746 06:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=92855 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=91e5e1114868f2c76d50a12a8cdd69272401315d3d1209e3 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6yT 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 91e5e1114868f2c76d50a12a8cdd69272401315d3d1209e3 0 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 91e5e1114868f2c76d50a12a8cdd69272401315d3d1209e3 0 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=91e5e1114868f2c76d50a12a8cdd69272401315d3d1209e3 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:17.680 06:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6yT 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6yT 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.6yT 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=89016a79dab5ec46985e386adf341a73fc183b88766e637bbedafa014402816a 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Q0W 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 89016a79dab5ec46985e386adf341a73fc183b88766e637bbedafa014402816a 3 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 89016a79dab5ec46985e386adf341a73fc183b88766e637bbedafa014402816a 3 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=89016a79dab5ec46985e386adf341a73fc183b88766e637bbedafa014402816a 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Q0W 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Q0W 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Q0W 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:17.680 06:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=15911898ede8a73faa38a91f4350bc03 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mRD 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 15911898ede8a73faa38a91f4350bc03 1 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 15911898ede8a73faa38a91f4350bc03 1 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=15911898ede8a73faa38a91f4350bc03 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:17.680 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mRD 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mRD 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.mRD 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=72ab367677941dc1a3a17ab1e36f454af3be3f78934a9c1e 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xcJ 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 72ab367677941dc1a3a17ab1e36f454af3be3f78934a9c1e 2 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 72ab367677941dc1a3a17ab1e36f454af3be3f78934a9c1e 2 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=72ab367677941dc1a3a17ab1e36f454af3be3f78934a9c1e 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xcJ 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xcJ 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.xcJ 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a2a9494c815edef4f352fb1636618c62601686f896f4291c 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zdH 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a2a9494c815edef4f352fb1636618c62601686f896f4291c 2 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a2a9494c815edef4f352fb1636618c62601686f896f4291c 2 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a2a9494c815edef4f352fb1636618c62601686f896f4291c 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zdH 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zdH 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.zdH 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:17.939 06:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c91cf9f8278890c628c898fa0341a9a0 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kH7 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c91cf9f8278890c628c898fa0341a9a0 1 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c91cf9f8278890c628c898fa0341a9a0 1 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c91cf9f8278890c628c898fa0341a9a0 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kH7 00:18:17.939 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kH7 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kH7 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a7c7c545200ac4f01f40006f96fc6f06d1b0cc6eeb175a54a08db9d886d12b6a 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.M6F 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
a7c7c545200ac4f01f40006f96fc6f06d1b0cc6eeb175a54a08db9d886d12b6a 3 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a7c7c545200ac4f01f40006f96fc6f06d1b0cc6eeb175a54a08db9d886d12b6a 3 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a7c7c545200ac4f01f40006f96fc6f06d1b0cc6eeb175a54a08db9d886d12b6a 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.M6F 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.M6F 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.M6F 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 92815 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 92815 ']' 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.940 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.197 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.197 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:18.197 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 92855 /var/tmp/host.sock 00:18:18.198 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 92855 ']' 00:18:18.198 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:18.198 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:18.198 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
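
[editor's note] The gen_dhchap_key calls traced above boil down to: read N random bytes with xxd from /dev/urandom, then wrap the hex string in the NVMe-oF DHHC-1 secret representation (nvmf/common.sh@751-760). Below is a minimal standalone sketch of that step, combining the traced format_dhchap_key/format_key pair into one hypothetical helper. It assumes the DHHC-1:<digest-id>:<base64(secret || CRC-32 little-endian)>: layout, which is consistent with the secrets visible later in this log (e.g. the DHHC-1:02:NzJhYjM2... secret base64-decodes back to the 72ab3676... hex key generated above plus four trailing CRC bytes); the Python body is paraphrased from the trace, not copied.

# Hypothetical condensation of gen_dhchap_key + format_key as traced above.
# digest ids per the traced table: null=0, sha256=1, sha384=2, sha512=3.
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
    file=$(mktemp -t spdk.key-sketch.XXX)
    # Assumption: secret bytes are the ASCII hex string plus a little-endian CRC-32,
    # base64-encoded, with the digest id as two hex digits in the header.
    python - "$key" "$digest_id" > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
    chmod 0600 "$file"
    echo "$file"
}

For example, gen_dhchap_key_sketch 1 32 would produce a file holding a DHHC-1:01: secret of the same shape as the --dhchap-secret values passed to nvme connect later in this run.
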
00:18:18.198 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.198 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6yT 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6yT 00:18:18.455 06:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6yT 00:18:18.713 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Q0W ]] 00:18:18.713 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q0W 00:18:18.713 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.713 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.713 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.713 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q0W 00:18:18.713 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q0W 00:18:18.970 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:18.970 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.mRD 00:18:18.970 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.970 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.970 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.970 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.mRD 00:18:18.970 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.mRD 00:18:19.228 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.xcJ ]] 00:18:19.228 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xcJ 00:18:19.228 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.228 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.228 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.228 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xcJ 00:18:19.228 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xcJ 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zdH 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.zdH 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.zdH 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kH7 ]] 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kH7 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kH7 00:18:19.485 06:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kH7 00:18:19.743 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:19.743 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.M6F 00:18:19.743 06:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.743 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.743 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.743 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.M6F 00:18:19.743 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.M6F 00:18:20.000 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:20.000 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:20.000 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.000 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.000 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:20.000 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.258 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.516 00:18:20.516 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.516 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.516 06:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.773 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.773 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.773 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.773 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.773 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.773 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.773 { 00:18:20.773 "auth": { 00:18:20.773 "dhgroup": "null", 00:18:20.773 "digest": "sha256", 00:18:20.773 "state": "completed" 00:18:20.773 }, 00:18:20.773 "cntlid": 1, 00:18:20.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:20.773 "listen_address": { 00:18:20.773 "adrfam": "IPv4", 00:18:20.773 "traddr": "10.0.0.3", 00:18:20.773 "trsvcid": "4420", 00:18:20.773 "trtype": "TCP" 00:18:20.773 }, 00:18:20.773 "peer_address": { 00:18:20.773 "adrfam": "IPv4", 00:18:20.773 "traddr": "10.0.0.1", 00:18:20.773 "trsvcid": "33074", 00:18:20.773 "trtype": "TCP" 00:18:20.773 }, 00:18:20.773 "qid": 0, 00:18:20.773 "state": "enabled", 00:18:20.773 "thread": "nvmf_tgt_poll_group_000" 00:18:20.773 } 00:18:20.773 ]' 00:18:20.773 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.774 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.774 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.774 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:20.774 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.774 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.774 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.774 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.031 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:21.031 06:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:25.296 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.296 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:25.296 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.296 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.296 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.296 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.297 06:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.297 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.297 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.555 { 00:18:25.555 "auth": { 00:18:25.555 "dhgroup": "null", 00:18:25.555 "digest": "sha256", 00:18:25.555 "state": "completed" 00:18:25.555 }, 00:18:25.555 "cntlid": 3, 00:18:25.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:25.555 "listen_address": { 00:18:25.555 "adrfam": "IPv4", 00:18:25.555 "traddr": "10.0.0.3", 00:18:25.555 "trsvcid": "4420", 00:18:25.555 "trtype": "TCP" 00:18:25.555 }, 00:18:25.555 "peer_address": { 00:18:25.555 "adrfam": "IPv4", 00:18:25.555 "traddr": "10.0.0.1", 00:18:25.555 "trsvcid": "33114", 00:18:25.555 "trtype": "TCP" 00:18:25.555 }, 00:18:25.555 "qid": 0, 00:18:25.555 "state": "enabled", 00:18:25.555 "thread": "nvmf_tgt_poll_group_000" 00:18:25.555 } 00:18:25.555 ]' 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.555 06:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.813 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret 
DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:25.813 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:26.383 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.383 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:26.384 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.384 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.384 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.384 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.384 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.384 06:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.644 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.904 00:18:26.904 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.904 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.904 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.164 { 00:18:27.164 "auth": { 00:18:27.164 "dhgroup": "null", 00:18:27.164 "digest": "sha256", 00:18:27.164 "state": "completed" 00:18:27.164 }, 00:18:27.164 "cntlid": 5, 00:18:27.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:27.164 "listen_address": { 00:18:27.164 "adrfam": "IPv4", 00:18:27.164 "traddr": "10.0.0.3", 00:18:27.164 "trsvcid": "4420", 00:18:27.164 "trtype": "TCP" 00:18:27.164 }, 00:18:27.164 "peer_address": { 00:18:27.164 "adrfam": "IPv4", 00:18:27.164 "traddr": "10.0.0.1", 00:18:27.164 "trsvcid": "33150", 00:18:27.164 "trtype": "TCP" 00:18:27.164 }, 00:18:27.164 "qid": 0, 00:18:27.164 "state": "enabled", 00:18:27.164 "thread": "nvmf_tgt_poll_group_000" 00:18:27.164 } 00:18:27.164 ]' 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.164 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.424 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:27.424 06:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:27.997 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.259 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.520 00:18:28.520 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.520 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.520 06:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.782 { 00:18:28.782 "auth": { 00:18:28.782 "dhgroup": "null", 00:18:28.782 "digest": "sha256", 00:18:28.782 "state": "completed" 00:18:28.782 }, 00:18:28.782 "cntlid": 7, 00:18:28.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:28.782 "listen_address": { 00:18:28.782 "adrfam": "IPv4", 00:18:28.782 "traddr": "10.0.0.3", 00:18:28.782 "trsvcid": "4420", 00:18:28.782 "trtype": "TCP" 00:18:28.782 }, 00:18:28.782 "peer_address": { 00:18:28.782 "adrfam": "IPv4", 00:18:28.782 "traddr": "10.0.0.1", 00:18:28.782 "trsvcid": "33184", 00:18:28.782 "trtype": "TCP" 00:18:28.782 }, 00:18:28.782 "qid": 0, 00:18:28.782 "state": "enabled", 00:18:28.782 "thread": "nvmf_tgt_poll_group_000" 00:18:28.782 } 00:18:28.782 ]' 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.782 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.044 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:29.044 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.617 06:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.888 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.889 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.889 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.155 00:18:30.155 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.155 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.155 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.416 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.416 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.416 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.416 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.416 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.416 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.417 { 00:18:30.417 "auth": { 00:18:30.417 "dhgroup": "ffdhe2048", 00:18:30.417 "digest": "sha256", 00:18:30.417 "state": "completed" 00:18:30.417 }, 00:18:30.417 "cntlid": 9, 00:18:30.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:30.417 "listen_address": { 00:18:30.417 "adrfam": "IPv4", 00:18:30.417 "traddr": "10.0.0.3", 00:18:30.417 "trsvcid": "4420", 00:18:30.417 "trtype": "TCP" 00:18:30.417 }, 00:18:30.417 "peer_address": { 00:18:30.417 "adrfam": "IPv4", 00:18:30.417 "traddr": "10.0.0.1", 00:18:30.417 "trsvcid": "39466", 00:18:30.417 "trtype": "TCP" 00:18:30.417 }, 00:18:30.417 "qid": 0, 00:18:30.417 "state": "enabled", 00:18:30.417 "thread": "nvmf_tgt_poll_group_000" 00:18:30.417 } 00:18:30.417 ]' 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.417 06:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.677 
06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:30.677 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.243 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.501 06:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.759 00:18:31.759 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.759 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.759 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.017 { 00:18:32.017 "auth": { 00:18:32.017 "dhgroup": "ffdhe2048", 00:18:32.017 "digest": "sha256", 00:18:32.017 "state": "completed" 00:18:32.017 }, 00:18:32.017 "cntlid": 11, 00:18:32.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:32.017 "listen_address": { 00:18:32.017 "adrfam": "IPv4", 00:18:32.017 "traddr": "10.0.0.3", 00:18:32.017 "trsvcid": "4420", 00:18:32.017 "trtype": "TCP" 00:18:32.017 }, 00:18:32.017 "peer_address": { 00:18:32.017 "adrfam": "IPv4", 00:18:32.017 "traddr": "10.0.0.1", 00:18:32.017 "trsvcid": "39510", 00:18:32.017 "trtype": "TCP" 00:18:32.017 }, 00:18:32.017 "qid": 0, 00:18:32.017 "state": "enabled", 00:18:32.017 "thread": "nvmf_tgt_poll_group_000" 00:18:32.017 } 00:18:32.017 ]' 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.017 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.017 
06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.274 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:32.274 06:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:32.839 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.097 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.356 00:18:33.356 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.356 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.356 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.614 { 00:18:33.614 "auth": { 00:18:33.614 "dhgroup": "ffdhe2048", 00:18:33.614 "digest": "sha256", 00:18:33.614 "state": "completed" 00:18:33.614 }, 00:18:33.614 "cntlid": 13, 00:18:33.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:33.614 "listen_address": { 00:18:33.614 "adrfam": "IPv4", 00:18:33.614 "traddr": "10.0.0.3", 00:18:33.614 "trsvcid": "4420", 00:18:33.614 "trtype": "TCP" 00:18:33.614 }, 00:18:33.614 "peer_address": { 00:18:33.614 "adrfam": "IPv4", 00:18:33.614 "traddr": "10.0.0.1", 00:18:33.614 "trsvcid": "39538", 00:18:33.614 "trtype": "TCP" 00:18:33.614 }, 00:18:33.614 "qid": 0, 00:18:33.614 "state": "enabled", 00:18:33.614 "thread": "nvmf_tgt_poll_group_000" 00:18:33.614 } 00:18:33.614 ]' 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.614 06:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.614 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.614 06:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.614 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.871 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:33.871 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:34.530 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.530 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:34.530 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.530 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.530 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.530 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.530 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.531 06:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
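The records above and below show one pass of the test's auth matrix: for each digest and FFDHE group, the enclosing loop in target/auth.sh restricts the host-side initiator to that combination (hostrpc bdev_nvme_set_options), then connect_authenticate registers the host NQN on the subsystem with a key pair, attaches an SPDK host controller over TCP, checks the negotiated auth parameters on the target's qpairs with jq, repeats the handshake with the kernel nvme initiator, and tears everything down before the next keyid. What follows is a minimal sketch of that per-key pass reconstructed from the RPC calls visible in this trace, not the verbatim script: the hostrpc/rpc_cmd wrappers (rpc.py against /var/tmp/host.sock and the target socket), the keys/ckeys arrays of DHHC-1 secrets, and the prior registration of the named keyN/ckeyN objects are assumed to exist from earlier in the script, and the set_options step is folded into the function here for readability.

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local subnqn=nqn.2024-03.io.spdk:cnode0
    local hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30

    # Mirror the script's conditional controller key: an empty ckeys[$keyid]
    # (as with key3 in the trace) drops the --dhchap-ctrlr-key arguments.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

    # Allow only the digest/dhgroup under test on the host-side initiator
    # (done by the enclosing loop in the actual trace).
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Authorize the host NQN on the target subsystem with this key pair.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"

    # Attach an SPDK host controller over TCP; this performs the DH-HMAC-CHAP handshake.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"

    # Confirm the controller came up and the target saw the expected negotiation.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    local qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0

    # Repeat the same handshake with the kernel initiator, passing the literal
    # DHHC-1 secrets rather than named keys, then clean up for the next keyid.
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 \
        --dhchap-secret "${keys[$keyid]}" \
        ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

# e.g. the iteration traced above: connect_authenticate sha256 ffdhe2048 2

Note two details the trace makes visible: the RPC layer takes named key objects (--dhchap-key key2) while nvme-cli takes the literal DHHC-1:... secrets, and each pass tears the session down completely (detach, disconnect, remove_host) so the next digest/dhgroup combination negotiates from scratch rather than reusing an authenticated association.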
00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.531 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.792 00:18:34.792 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.792 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.792 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.050 { 00:18:35.050 "auth": { 00:18:35.050 "dhgroup": "ffdhe2048", 00:18:35.050 "digest": "sha256", 00:18:35.050 "state": "completed" 00:18:35.050 }, 00:18:35.050 "cntlid": 15, 00:18:35.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:35.050 "listen_address": { 00:18:35.050 "adrfam": "IPv4", 00:18:35.050 "traddr": "10.0.0.3", 00:18:35.050 "trsvcid": "4420", 00:18:35.050 "trtype": "TCP" 00:18:35.050 }, 00:18:35.050 "peer_address": { 00:18:35.050 "adrfam": "IPv4", 00:18:35.050 "traddr": "10.0.0.1", 00:18:35.050 "trsvcid": "39552", 00:18:35.050 "trtype": "TCP" 00:18:35.050 }, 00:18:35.050 "qid": 0, 00:18:35.050 "state": "enabled", 00:18:35.050 "thread": "nvmf_tgt_poll_group_000" 00:18:35.050 } 00:18:35.050 ]' 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.050 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.310 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.310 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.310 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.310 
06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.310 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.310 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:35.310 06:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:35.883 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.145 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.718 00:18:36.718 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.718 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.718 06:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.718 { 00:18:36.718 "auth": { 00:18:36.718 "dhgroup": "ffdhe3072", 00:18:36.718 "digest": "sha256", 00:18:36.718 "state": "completed" 00:18:36.718 }, 00:18:36.718 "cntlid": 17, 00:18:36.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:36.718 "listen_address": { 00:18:36.718 "adrfam": "IPv4", 00:18:36.718 "traddr": "10.0.0.3", 00:18:36.718 "trsvcid": "4420", 00:18:36.718 "trtype": "TCP" 00:18:36.718 }, 00:18:36.718 "peer_address": { 00:18:36.718 "adrfam": "IPv4", 00:18:36.718 "traddr": "10.0.0.1", 00:18:36.718 "trsvcid": "39582", 00:18:36.718 "trtype": "TCP" 00:18:36.718 }, 00:18:36.718 "qid": 0, 00:18:36.718 "state": "enabled", 00:18:36.718 "thread": "nvmf_tgt_poll_group_000" 00:18:36.718 } 00:18:36.718 ]' 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.718 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.979 06:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.979 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.979 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.979 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:36.979 06:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:37.551 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.813 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.073 00:18:38.073 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.073 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.073 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.334 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.334 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.334 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.334 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.334 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.334 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.334 { 00:18:38.334 "auth": { 00:18:38.334 "dhgroup": "ffdhe3072", 00:18:38.334 "digest": "sha256", 00:18:38.334 "state": "completed" 00:18:38.334 }, 00:18:38.334 "cntlid": 19, 00:18:38.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:38.334 "listen_address": { 00:18:38.334 "adrfam": "IPv4", 00:18:38.334 "traddr": "10.0.0.3", 00:18:38.334 "trsvcid": "4420", 00:18:38.334 "trtype": "TCP" 00:18:38.334 }, 00:18:38.334 "peer_address": { 00:18:38.334 "adrfam": "IPv4", 00:18:38.334 "traddr": "10.0.0.1", 00:18:38.334 "trsvcid": "39620", 00:18:38.334 "trtype": "TCP" 00:18:38.334 }, 00:18:38.334 "qid": 0, 00:18:38.334 "state": "enabled", 00:18:38.334 "thread": "nvmf_tgt_poll_group_000" 00:18:38.334 } 00:18:38.334 ]' 00:18:38.334 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.335 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.335 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.335 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.335 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.593 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.593 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.593 06:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.593 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:38.593 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:39.158 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.418 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.419 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.419 06:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.679 00:18:39.679 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.679 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.679 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.939 { 00:18:39.939 "auth": { 00:18:39.939 "dhgroup": "ffdhe3072", 00:18:39.939 "digest": "sha256", 00:18:39.939 "state": "completed" 00:18:39.939 }, 00:18:39.939 "cntlid": 21, 00:18:39.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:39.939 "listen_address": { 00:18:39.939 "adrfam": "IPv4", 00:18:39.939 "traddr": "10.0.0.3", 00:18:39.939 "trsvcid": "4420", 00:18:39.939 "trtype": "TCP" 00:18:39.939 }, 00:18:39.939 "peer_address": { 00:18:39.939 "adrfam": "IPv4", 00:18:39.939 "traddr": "10.0.0.1", 00:18:39.939 "trsvcid": "41710", 00:18:39.939 "trtype": "TCP" 00:18:39.939 }, 00:18:39.939 "qid": 0, 00:18:39.939 "state": "enabled", 00:18:39.939 "thread": "nvmf_tgt_poll_group_000" 00:18:39.939 } 00:18:39.939 ]' 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.939 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.939 06:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.200 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.200 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.200 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.200 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.200 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.200 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:40.200 06:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:18:41.139 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.140 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.140 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.140 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.140 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.140 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.397 00:18:41.397 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.397 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.397 06:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.656 { 00:18:41.656 "auth": { 00:18:41.656 "dhgroup": "ffdhe3072", 00:18:41.656 "digest": "sha256", 00:18:41.656 "state": "completed" 00:18:41.656 }, 00:18:41.656 "cntlid": 23, 00:18:41.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:41.656 "listen_address": { 00:18:41.656 "adrfam": "IPv4", 00:18:41.656 "traddr": "10.0.0.3", 00:18:41.656 "trsvcid": "4420", 00:18:41.656 "trtype": "TCP" 00:18:41.656 }, 00:18:41.656 "peer_address": { 00:18:41.656 "adrfam": "IPv4", 00:18:41.656 "traddr": "10.0.0.1", 00:18:41.656 "trsvcid": "41734", 00:18:41.656 "trtype": "TCP" 00:18:41.656 }, 00:18:41.656 "qid": 0, 00:18:41.656 "state": "enabled", 00:18:41.656 "thread": "nvmf_tgt_poll_group_000" 00:18:41.656 } 00:18:41.656 ]' 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.656 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.914 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:41.914 06:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.849 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.107 00:18:43.107 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.107 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.107 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.365 { 00:18:43.365 "auth": { 00:18:43.365 "dhgroup": "ffdhe4096", 00:18:43.365 "digest": "sha256", 00:18:43.365 "state": "completed" 00:18:43.365 }, 00:18:43.365 "cntlid": 25, 00:18:43.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:43.365 "listen_address": { 00:18:43.365 "adrfam": "IPv4", 00:18:43.365 "traddr": "10.0.0.3", 00:18:43.365 "trsvcid": "4420", 00:18:43.365 "trtype": "TCP" 00:18:43.365 }, 00:18:43.365 "peer_address": { 00:18:43.365 "adrfam": "IPv4", 00:18:43.365 "traddr": "10.0.0.1", 00:18:43.365 "trsvcid": "41760", 00:18:43.365 "trtype": "TCP" 00:18:43.365 }, 00:18:43.365 "qid": 0, 00:18:43.365 "state": "enabled", 00:18:43.365 "thread": "nvmf_tgt_poll_group_000" 00:18:43.365 } 00:18:43.365 ]' 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.365 06:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.624 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:43.624 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.190 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:44.447 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:44.447 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.447 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:44.447 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:44.447 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.447 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.448 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.448 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.448 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.448 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.448 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.448 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.448 06:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.020 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.020 { 00:18:45.020 "auth": { 00:18:45.020 "dhgroup": "ffdhe4096", 00:18:45.020 "digest": "sha256", 00:18:45.020 "state": "completed" 00:18:45.020 }, 00:18:45.020 "cntlid": 27, 00:18:45.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:45.020 "listen_address": { 00:18:45.020 "adrfam": "IPv4", 00:18:45.020 "traddr": "10.0.0.3", 00:18:45.020 "trsvcid": "4420", 00:18:45.020 "trtype": "TCP" 00:18:45.020 }, 00:18:45.020 "peer_address": { 00:18:45.020 "adrfam": "IPv4", 00:18:45.020 "traddr": "10.0.0.1", 00:18:45.020 "trsvcid": "41792", 00:18:45.020 "trtype": "TCP" 00:18:45.020 }, 00:18:45.020 "qid": 0, 
00:18:45.020 "state": "enabled", 00:18:45.020 "thread": "nvmf_tgt_poll_group_000" 00:18:45.020 } 00:18:45.020 ]' 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.020 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.287 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.287 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.287 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.287 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.287 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.287 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:45.287 06:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.220 06:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.786 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.786 { 00:18:46.786 "auth": { 00:18:46.786 "dhgroup": "ffdhe4096", 00:18:46.786 "digest": "sha256", 00:18:46.786 "state": "completed" 00:18:46.786 }, 00:18:46.786 "cntlid": 29, 00:18:46.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:46.786 "listen_address": { 00:18:46.786 "adrfam": "IPv4", 00:18:46.786 "traddr": "10.0.0.3", 00:18:46.786 "trsvcid": "4420", 00:18:46.786 "trtype": "TCP" 00:18:46.786 }, 00:18:46.786 "peer_address": { 00:18:46.786 "adrfam": "IPv4", 00:18:46.786 "traddr": "10.0.0.1", 
00:18:46.786 "trsvcid": "41822", 00:18:46.786 "trtype": "TCP" 00:18:46.786 }, 00:18:46.786 "qid": 0, 00:18:46.786 "state": "enabled", 00:18:46.786 "thread": "nvmf_tgt_poll_group_000" 00:18:46.786 } 00:18:46.786 ]' 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.786 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.044 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.044 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.044 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.044 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.044 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.044 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:47.044 06:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.975 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.232 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.489 { 00:18:48.489 "auth": { 00:18:48.489 "dhgroup": "ffdhe4096", 00:18:48.489 "digest": "sha256", 00:18:48.489 "state": "completed" 00:18:48.489 }, 00:18:48.489 "cntlid": 31, 00:18:48.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:48.489 "listen_address": { 00:18:48.489 "adrfam": "IPv4", 00:18:48.489 "traddr": "10.0.0.3", 00:18:48.489 "trsvcid": "4420", 00:18:48.489 "trtype": "TCP" 00:18:48.489 }, 00:18:48.489 "peer_address": { 00:18:48.489 "adrfam": "IPv4", 00:18:48.489 "traddr": 
"10.0.0.1", 00:18:48.489 "trsvcid": "41856", 00:18:48.489 "trtype": "TCP" 00:18:48.489 }, 00:18:48.489 "qid": 0, 00:18:48.489 "state": "enabled", 00:18:48.489 "thread": "nvmf_tgt_poll_group_000" 00:18:48.489 } 00:18:48.489 ]' 00:18:48.489 06:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.489 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.746 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.746 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.746 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.746 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.746 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.746 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.004 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:49.004 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:49.570 06:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.570 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.140 00:18:50.140 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.140 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.140 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.140 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.140 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.140 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.140 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.398 { 00:18:50.398 "auth": { 00:18:50.398 "dhgroup": "ffdhe6144", 00:18:50.398 "digest": "sha256", 00:18:50.398 "state": "completed" 00:18:50.398 }, 00:18:50.398 "cntlid": 33, 00:18:50.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:50.398 "listen_address": { 00:18:50.398 "adrfam": "IPv4", 00:18:50.398 "traddr": "10.0.0.3", 00:18:50.398 "trsvcid": "4420", 00:18:50.398 
"trtype": "TCP" 00:18:50.398 }, 00:18:50.398 "peer_address": { 00:18:50.398 "adrfam": "IPv4", 00:18:50.398 "traddr": "10.0.0.1", 00:18:50.398 "trsvcid": "45912", 00:18:50.398 "trtype": "TCP" 00:18:50.398 }, 00:18:50.398 "qid": 0, 00:18:50.398 "state": "enabled", 00:18:50.398 "thread": "nvmf_tgt_poll_group_000" 00:18:50.398 } 00:18:50.398 ]' 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.398 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.656 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:50.656 06:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.221 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.479 06:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.737 00:18:51.737 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.737 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.737 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.995 { 00:18:51.995 "auth": { 00:18:51.995 "dhgroup": "ffdhe6144", 00:18:51.995 "digest": "sha256", 00:18:51.995 "state": "completed" 00:18:51.995 }, 00:18:51.995 "cntlid": 35, 00:18:51.995 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:51.995 "listen_address": { 00:18:51.995 "adrfam": "IPv4", 00:18:51.995 "traddr": "10.0.0.3", 00:18:51.995 "trsvcid": "4420", 00:18:51.995 "trtype": "TCP" 00:18:51.995 }, 00:18:51.995 "peer_address": { 00:18:51.995 "adrfam": "IPv4", 00:18:51.995 "traddr": "10.0.0.1", 00:18:51.995 "trsvcid": "45944", 00:18:51.995 "trtype": "TCP" 00:18:51.995 }, 00:18:51.995 "qid": 0, 00:18:51.995 "state": "enabled", 00:18:51.995 "thread": "nvmf_tgt_poll_group_000" 00:18:51.995 } 00:18:51.995 ]' 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.995 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.253 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:52.253 06:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.819 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.077 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.335 00:18:53.335 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.335 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.335 06:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.593 { 00:18:53.593 "auth": { 00:18:53.593 "dhgroup": "ffdhe6144", 
00:18:53.593 "digest": "sha256", 00:18:53.593 "state": "completed" 00:18:53.593 }, 00:18:53.593 "cntlid": 37, 00:18:53.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:53.593 "listen_address": { 00:18:53.593 "adrfam": "IPv4", 00:18:53.593 "traddr": "10.0.0.3", 00:18:53.593 "trsvcid": "4420", 00:18:53.593 "trtype": "TCP" 00:18:53.593 }, 00:18:53.593 "peer_address": { 00:18:53.593 "adrfam": "IPv4", 00:18:53.593 "traddr": "10.0.0.1", 00:18:53.593 "trsvcid": "45956", 00:18:53.593 "trtype": "TCP" 00:18:53.593 }, 00:18:53.593 "qid": 0, 00:18:53.593 "state": "enabled", 00:18:53.593 "thread": "nvmf_tgt_poll_group_000" 00:18:53.593 } 00:18:53.593 ]' 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.593 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.851 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.851 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.851 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.851 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:53.851 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:18:54.482 06:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.740 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.309 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.309 { 00:18:55.309 "auth": { 00:18:55.309 "dhgroup": 
"ffdhe6144", 00:18:55.309 "digest": "sha256", 00:18:55.309 "state": "completed" 00:18:55.309 }, 00:18:55.309 "cntlid": 39, 00:18:55.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:55.309 "listen_address": { 00:18:55.309 "adrfam": "IPv4", 00:18:55.309 "traddr": "10.0.0.3", 00:18:55.309 "trsvcid": "4420", 00:18:55.309 "trtype": "TCP" 00:18:55.309 }, 00:18:55.309 "peer_address": { 00:18:55.309 "adrfam": "IPv4", 00:18:55.309 "traddr": "10.0.0.1", 00:18:55.309 "trsvcid": "45994", 00:18:55.309 "trtype": "TCP" 00:18:55.309 }, 00:18:55.309 "qid": 0, 00:18:55.309 "state": "enabled", 00:18:55.309 "thread": "nvmf_tgt_poll_group_000" 00:18:55.309 } 00:18:55.309 ]' 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.309 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.570 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.570 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.570 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.570 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.570 06:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.832 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:55.832 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.400 06:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.968 00:18:56.968 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.968 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.968 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.228 06:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.228 { 00:18:57.228 "auth": { 00:18:57.228 "dhgroup": "ffdhe8192", 00:18:57.228 "digest": "sha256", 00:18:57.228 "state": "completed" 00:18:57.228 }, 00:18:57.228 "cntlid": 41, 00:18:57.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:57.228 "listen_address": { 00:18:57.228 "adrfam": "IPv4", 00:18:57.228 "traddr": "10.0.0.3", 00:18:57.228 "trsvcid": "4420", 00:18:57.228 "trtype": "TCP" 00:18:57.228 }, 00:18:57.228 "peer_address": { 00:18:57.228 "adrfam": "IPv4", 00:18:57.228 "traddr": "10.0.0.1", 00:18:57.228 "trsvcid": "46028", 00:18:57.228 "trtype": "TCP" 00:18:57.228 }, 00:18:57.228 "qid": 0, 00:18:57.228 "state": "enabled", 00:18:57.228 "thread": "nvmf_tgt_poll_group_000" 00:18:57.228 } 00:18:57.228 ]' 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.228 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.486 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.486 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.486 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.486 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.486 06:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.486 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:57.486 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:18:58.051 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.308 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:58.308 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.308 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.308 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.308 06:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.308 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:58.308 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.309 06:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.875 00:18:58.875 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.875 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.875 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.134 06:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.134 { 00:18:59.134 "auth": { 00:18:59.134 "dhgroup": "ffdhe8192", 00:18:59.134 "digest": "sha256", 00:18:59.134 "state": "completed" 00:18:59.134 }, 00:18:59.134 "cntlid": 43, 00:18:59.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:18:59.134 "listen_address": { 00:18:59.134 "adrfam": "IPv4", 00:18:59.134 "traddr": "10.0.0.3", 00:18:59.134 "trsvcid": "4420", 00:18:59.134 "trtype": "TCP" 00:18:59.134 }, 00:18:59.134 "peer_address": { 00:18:59.134 "adrfam": "IPv4", 00:18:59.134 "traddr": "10.0.0.1", 00:18:59.134 "trsvcid": "46054", 00:18:59.134 "trtype": "TCP" 00:18:59.134 }, 00:18:59.134 "qid": 0, 00:18:59.134 "state": "enabled", 00:18:59.134 "thread": "nvmf_tgt_poll_group_000" 00:18:59.134 } 00:18:59.134 ]' 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.134 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.393 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:59.393 06:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
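[annotation] After the SPDK-host assertions pass, each cycle also exercises the Linux kernel initiator: nvme-cli connects to the same subsystem with the DHHC-1 secrets passed explicitly, disconnects, and the host is de-authorized so the next combination starts from a clean target state. A sketch of that half of the cycle; the secrets below are placeholders, whereas the trace uses the test keys generated earlier in the run:

subnqn=nqn.2024-03.io.spdk:cnode0
hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"

# Kernel-initiator connect with explicit DH-HMAC-CHAP secrets (host key
# and controller key), mirroring the traced nvme_connect helper.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:01:<host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller key>:'

# Tear down: the disconnect prints "... disconnected 1 controller(s)",
# then the host is removed from the subsystem's allowed-host list.
nvme disconnect -n "$subnqn"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"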
00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.957 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.215 06:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.780 00:19:00.780 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.780 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.780 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.039 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.039 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.040 06:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.040 { 00:19:01.040 "auth": { 00:19:01.040 "dhgroup": "ffdhe8192", 00:19:01.040 "digest": "sha256", 00:19:01.040 "state": "completed" 00:19:01.040 }, 00:19:01.040 "cntlid": 45, 00:19:01.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:01.040 "listen_address": { 00:19:01.040 "adrfam": "IPv4", 00:19:01.040 "traddr": "10.0.0.3", 00:19:01.040 "trsvcid": "4420", 00:19:01.040 "trtype": "TCP" 00:19:01.040 }, 00:19:01.040 "peer_address": { 00:19:01.040 "adrfam": "IPv4", 00:19:01.040 "traddr": "10.0.0.1", 00:19:01.040 "trsvcid": "35378", 00:19:01.040 "trtype": "TCP" 00:19:01.040 }, 00:19:01.040 "qid": 0, 00:19:01.040 "state": "enabled", 00:19:01.040 "thread": "nvmf_tgt_poll_group_000" 00:19:01.040 } 00:19:01.040 ]' 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.040 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.298 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:01.298 06:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.233 06:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.798 00:19:02.798 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.798 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.798 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.057 
06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.057 { 00:19:03.057 "auth": { 00:19:03.057 "dhgroup": "ffdhe8192", 00:19:03.057 "digest": "sha256", 00:19:03.057 "state": "completed" 00:19:03.057 }, 00:19:03.057 "cntlid": 47, 00:19:03.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:03.057 "listen_address": { 00:19:03.057 "adrfam": "IPv4", 00:19:03.057 "traddr": "10.0.0.3", 00:19:03.057 "trsvcid": "4420", 00:19:03.057 "trtype": "TCP" 00:19:03.057 }, 00:19:03.057 "peer_address": { 00:19:03.057 "adrfam": "IPv4", 00:19:03.057 "traddr": "10.0.0.1", 00:19:03.057 "trsvcid": "35402", 00:19:03.057 "trtype": "TCP" 00:19:03.057 }, 00:19:03.057 "qid": 0, 00:19:03.057 "state": "enabled", 00:19:03.057 "thread": "nvmf_tgt_poll_group_000" 00:19:03.057 } 00:19:03.057 ]' 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.057 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.315 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:03.315 06:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
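Each key is also exercised through the kernel NVMe/TCP host via nvme-cli, as in the connect just above. The two-digit field after DHHC-1 in the secrets encodes how the key is stored (00 for a plain secret, 01/02/03 for secrets pre-hashed with SHA-256/384/512), so the DHHC-1:03 key here is a SHA-512-transformed one. A condensed sketch of the connect/disconnect cycle, with HOSTNQN/HOSTID/KEY/CKEY standing in for the UUID-based values used throughout this run:

  # DH-HMAC-CHAP connect through the kernel host stack: -i 1 requests a single
  # I/O queue, -l 0 sets ctrl-loss-tmo to 0 so a failed attempt gives up fast.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret "$KEY" \
      --dhchap-ctrl-secret "$CKEY"   # dropped for the unidirectional key3 case
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0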
00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.248 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.507 00:19:04.507 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.507 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.507 06:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.764 { 00:19:04.764 "auth": { 00:19:04.764 "dhgroup": "null", 00:19:04.764 "digest": "sha384", 00:19:04.764 "state": "completed" 00:19:04.764 }, 00:19:04.764 "cntlid": 49, 00:19:04.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:04.764 "listen_address": { 00:19:04.764 "adrfam": "IPv4", 00:19:04.764 "traddr": "10.0.0.3", 00:19:04.764 "trsvcid": "4420", 00:19:04.764 "trtype": "TCP" 00:19:04.764 }, 00:19:04.764 "peer_address": { 00:19:04.764 "adrfam": "IPv4", 00:19:04.764 "traddr": "10.0.0.1", 00:19:04.764 "trsvcid": "35436", 00:19:04.764 "trtype": "TCP" 00:19:04.764 }, 00:19:04.764 "qid": 0, 00:19:04.764 "state": "enabled", 00:19:04.764 "thread": "nvmf_tgt_poll_group_000" 00:19:04.764 } 00:19:04.764 ]' 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.764 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.022 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:05.022 06:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.955 06:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.955 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.214 00:19:06.214 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.214 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
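Before each dial-in the host entry on the subsystem is recreated with the key pair under test, as in the nvmf_subsystem_add_host call above; passing --dhchap-ctrlr-key alongside --dhchap-key is what requests bidirectional authentication (the key3 iterations pass --dhchap-key alone, since no controller key is defined for it, and so authenticate only the host). A sketch of that target-side registration, assuming key1/ckey1 were loaded into the keyring earlier in the script:

  # Re-admit the host with DH-HMAC-CHAP required; ckey1 additionally obliges
  # the controller to prove its identity back to the host.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1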
00:19:06.214 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.472 { 00:19:06.472 "auth": { 00:19:06.472 "dhgroup": "null", 00:19:06.472 "digest": "sha384", 00:19:06.472 "state": "completed" 00:19:06.472 }, 00:19:06.472 "cntlid": 51, 00:19:06.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:06.472 "listen_address": { 00:19:06.472 "adrfam": "IPv4", 00:19:06.472 "traddr": "10.0.0.3", 00:19:06.472 "trsvcid": "4420", 00:19:06.472 "trtype": "TCP" 00:19:06.472 }, 00:19:06.472 "peer_address": { 00:19:06.472 "adrfam": "IPv4", 00:19:06.472 "traddr": "10.0.0.1", 00:19:06.472 "trsvcid": "35452", 00:19:06.472 "trtype": "TCP" 00:19:06.472 }, 00:19:06.472 "qid": 0, 00:19:06.472 "state": "enabled", 00:19:06.472 "thread": "nvmf_tgt_poll_group_000" 00:19:06.472 } 00:19:06.472 ]' 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.472 06:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.729 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:06.729 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.293 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:07.293 06:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.549 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.807 00:19:07.807 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.807 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:07.807 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.064 { 00:19:08.064 "auth": { 00:19:08.064 "dhgroup": "null", 00:19:08.064 "digest": "sha384", 00:19:08.064 "state": "completed" 00:19:08.064 }, 00:19:08.064 "cntlid": 53, 00:19:08.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:08.064 "listen_address": { 00:19:08.064 "adrfam": "IPv4", 00:19:08.064 "traddr": "10.0.0.3", 00:19:08.064 "trsvcid": "4420", 00:19:08.064 "trtype": "TCP" 00:19:08.064 }, 00:19:08.064 "peer_address": { 00:19:08.064 "adrfam": "IPv4", 00:19:08.064 "traddr": "10.0.0.1", 00:19:08.064 "trsvcid": "35468", 00:19:08.064 "trtype": "TCP" 00:19:08.064 }, 00:19:08.064 "qid": 0, 00:19:08.064 "state": "enabled", 00:19:08.064 "thread": "nvmf_tgt_poll_group_000" 00:19:08.064 } 00:19:08.064 ]' 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.064 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.321 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:08.321 06:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:08.887 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.888 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:08.888 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.888 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.888 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.888 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.888 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:08.888 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.146 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.405 00:19:09.405 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.406 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.406 06:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.664 { 00:19:09.664 "auth": { 00:19:09.664 "dhgroup": "null", 00:19:09.664 "digest": "sha384", 00:19:09.664 "state": "completed" 00:19:09.664 }, 00:19:09.664 "cntlid": 55, 00:19:09.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:09.664 "listen_address": { 00:19:09.664 "adrfam": "IPv4", 00:19:09.664 "traddr": "10.0.0.3", 00:19:09.664 "trsvcid": "4420", 00:19:09.664 "trtype": "TCP" 00:19:09.664 }, 00:19:09.664 "peer_address": { 00:19:09.664 "adrfam": "IPv4", 00:19:09.664 "traddr": "10.0.0.1", 00:19:09.664 "trsvcid": "35422", 00:19:09.664 "trtype": "TCP" 00:19:09.664 }, 00:19:09.664 "qid": 0, 00:19:09.664 "state": "enabled", 00:19:09.664 "thread": "nvmf_tgt_poll_group_000" 00:19:09.664 } 00:19:09.664 ]' 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.664 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.922 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:09.922 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.487 06:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.745 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.003 00:19:11.003 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
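The userspace initiator half of each iteration is driven over a second RPC server at /var/tmp/host.sock: bdev_nvme_set_options pins the one digest and DH group the host may offer, and bdev_nvme_attach_controller performs the authenticated connect. Condensed from the ffdhe2048 pass above, assuming key0/ckey0 are already registered with the host RPC server:

  RPC='scripts/rpc.py -s /var/tmp/host.sock'
  # Restrict negotiation to sha384 + ffdhe2048 before connecting.
  $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # Attach the target as bdev nvme0, authenticating with key0/ckey0.
  $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Verify the controller came up, then detach for the next combination.
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  $RPC bdev_nvme_detach_controller nvme0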
00:19:11.003 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.003 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.261 { 00:19:11.261 "auth": { 00:19:11.261 "dhgroup": "ffdhe2048", 00:19:11.261 "digest": "sha384", 00:19:11.261 "state": "completed" 00:19:11.261 }, 00:19:11.261 "cntlid": 57, 00:19:11.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:11.261 "listen_address": { 00:19:11.261 "adrfam": "IPv4", 00:19:11.261 "traddr": "10.0.0.3", 00:19:11.261 "trsvcid": "4420", 00:19:11.261 "trtype": "TCP" 00:19:11.261 }, 00:19:11.261 "peer_address": { 00:19:11.261 "adrfam": "IPv4", 00:19:11.261 "traddr": "10.0.0.1", 00:19:11.261 "trsvcid": "35460", 00:19:11.261 "trtype": "TCP" 00:19:11.261 }, 00:19:11.261 "qid": 0, 00:19:11.261 "state": "enabled", 00:19:11.261 "thread": "nvmf_tgt_poll_group_000" 00:19:11.261 } 00:19:11.261 ]' 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.261 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.262 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.262 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.262 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.262 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.262 06:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.519 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:11.519 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: 
--dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:12.084 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.084 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:12.084 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.084 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.085 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.085 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.085 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.085 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.342 06:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.600 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.858 { 00:19:12.858 "auth": { 00:19:12.858 "dhgroup": "ffdhe2048", 00:19:12.858 "digest": "sha384", 00:19:12.858 "state": "completed" 00:19:12.858 }, 00:19:12.858 "cntlid": 59, 00:19:12.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:12.858 "listen_address": { 00:19:12.858 "adrfam": "IPv4", 00:19:12.858 "traddr": "10.0.0.3", 00:19:12.858 "trsvcid": "4420", 00:19:12.858 "trtype": "TCP" 00:19:12.858 }, 00:19:12.858 "peer_address": { 00:19:12.858 "adrfam": "IPv4", 00:19:12.858 "traddr": "10.0.0.1", 00:19:12.858 "trsvcid": "35492", 00:19:12.858 "trtype": "TCP" 00:19:12.858 }, 00:19:12.858 "qid": 0, 00:19:12.858 "state": "enabled", 00:19:12.858 "thread": "nvmf_tgt_poll_group_000" 00:19:12.858 } 00:19:12.858 ]' 00:19:12.858 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.116 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.116 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.116 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.116 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.116 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.116 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.116 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.374 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:13.375 06:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.941 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.264 00:19:14.264 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.264 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.264 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.522 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.522 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.522 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.522 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.522 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.522 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.522 { 00:19:14.522 "auth": { 00:19:14.522 "dhgroup": "ffdhe2048", 00:19:14.522 "digest": "sha384", 00:19:14.522 "state": "completed" 00:19:14.522 }, 00:19:14.522 "cntlid": 61, 00:19:14.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:14.522 "listen_address": { 00:19:14.522 "adrfam": "IPv4", 00:19:14.522 "traddr": "10.0.0.3", 00:19:14.522 "trsvcid": "4420", 00:19:14.522 "trtype": "TCP" 00:19:14.522 }, 00:19:14.522 "peer_address": { 00:19:14.522 "adrfam": "IPv4", 00:19:14.522 "traddr": "10.0.0.1", 00:19:14.522 "trsvcid": "35512", 00:19:14.522 "trtype": "TCP" 00:19:14.522 }, 00:19:14.522 "qid": 0, 00:19:14.522 "state": "enabled", 00:19:14.522 "thread": "nvmf_tgt_poll_group_000" 00:19:14.522 } 00:19:14.522 ]' 00:19:14.522 06:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.522 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.522 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.522 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.522 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.780 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.780 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.780 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.780 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:14.780 06:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.345 06:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:15.603 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.604 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.861 00:19:15.861 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.861 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.861 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.119 { 00:19:16.119 "auth": { 00:19:16.119 "dhgroup": "ffdhe2048", 00:19:16.119 "digest": "sha384", 00:19:16.119 "state": "completed" 00:19:16.119 }, 00:19:16.119 "cntlid": 63, 00:19:16.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:16.119 "listen_address": { 00:19:16.119 "adrfam": "IPv4", 00:19:16.119 "traddr": "10.0.0.3", 00:19:16.119 "trsvcid": "4420", 00:19:16.119 "trtype": "TCP" 00:19:16.119 }, 00:19:16.119 "peer_address": { 00:19:16.119 "adrfam": "IPv4", 00:19:16.119 "traddr": "10.0.0.1", 00:19:16.119 "trsvcid": "35534", 00:19:16.119 "trtype": "TCP" 00:19:16.119 }, 00:19:16.119 "qid": 0, 00:19:16.119 "state": "enabled", 00:19:16.119 "thread": "nvmf_tgt_poll_group_000" 00:19:16.119 } 00:19:16.119 ]' 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.119 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.377 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:16.377 06:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:16.943 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.202 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:17.203 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.461 00:19:17.461 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.461 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.461 06:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.719 { 00:19:17.719 "auth": { 00:19:17.719 "dhgroup": "ffdhe3072", 00:19:17.719 "digest": "sha384", 00:19:17.719 "state": "completed" 00:19:17.719 }, 00:19:17.719 "cntlid": 65, 00:19:17.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:17.719 "listen_address": { 00:19:17.719 "adrfam": "IPv4", 00:19:17.719 "traddr": "10.0.0.3", 00:19:17.719 "trsvcid": "4420", 00:19:17.719 "trtype": "TCP" 00:19:17.719 }, 00:19:17.719 "peer_address": { 00:19:17.719 "adrfam": "IPv4", 00:19:17.719 "traddr": "10.0.0.1", 00:19:17.719 "trsvcid": "35560", 00:19:17.719 "trtype": "TCP" 00:19:17.719 }, 00:19:17.719 "qid": 0, 00:19:17.719 "state": "enabled", 00:19:17.719 "thread": "nvmf_tgt_poll_group_000" 00:19:17.719 } 00:19:17.719 ]' 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.719 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.977 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.978 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.978 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.978 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.978 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.978 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:17.978 06:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:18.544 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.802 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.802 06:21:38 
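
Note: each connect_authenticate iteration traced here follows the same round-trip. A condensed sketch, with the rpc.py path shortened and $uuid standing in for the host NQN suffix (both illustrative; hostrpc mirrors the wrapper visible at target/auth.sh@31):

    hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }    # host-side SPDK app socket
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$uuid" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1       # target-side key setup
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1   # DH-HMAC-CHAP runs here
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # then verify auth state
    hostrpc bdev_nvme_detach_controller nvme0
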
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.803 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.061 00:19:19.061 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.061 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.061 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.319 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.320 { 00:19:19.320 "auth": { 00:19:19.320 "dhgroup": "ffdhe3072", 00:19:19.320 "digest": "sha384", 00:19:19.320 "state": "completed" 00:19:19.320 }, 00:19:19.320 "cntlid": 67, 00:19:19.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:19.320 "listen_address": { 00:19:19.320 "adrfam": "IPv4", 00:19:19.320 "traddr": "10.0.0.3", 00:19:19.320 "trsvcid": "4420", 00:19:19.320 "trtype": "TCP" 00:19:19.320 }, 00:19:19.320 "peer_address": { 00:19:19.320 "adrfam": "IPv4", 00:19:19.320 "traddr": "10.0.0.1", 00:19:19.320 "trsvcid": "43010", 00:19:19.320 "trtype": "TCP" 00:19:19.320 }, 00:19:19.320 "qid": 0, 00:19:19.320 "state": "enabled", 00:19:19.320 "thread": "nvmf_tgt_poll_group_000" 00:19:19.320 } 00:19:19.320 ]' 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.320 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.578 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.578 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.578 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.578 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.578 06:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.578 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:19.578 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:20.143 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.401 06:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.660 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.918 { 00:19:20.918 "auth": { 00:19:20.918 "dhgroup": "ffdhe3072", 00:19:20.918 "digest": "sha384", 00:19:20.918 "state": "completed" 00:19:20.918 }, 00:19:20.918 "cntlid": 69, 00:19:20.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:20.918 "listen_address": { 00:19:20.918 "adrfam": "IPv4", 00:19:20.918 "traddr": "10.0.0.3", 00:19:20.918 "trsvcid": "4420", 00:19:20.918 "trtype": "TCP" 00:19:20.918 }, 00:19:20.918 "peer_address": { 00:19:20.918 "adrfam": "IPv4", 00:19:20.918 "traddr": "10.0.0.1", 00:19:20.918 "trsvcid": "43046", 00:19:20.918 "trtype": "TCP" 00:19:20.918 }, 00:19:20.918 "qid": 0, 00:19:20.918 "state": "enabled", 00:19:20.918 "thread": "nvmf_tgt_poll_group_000" 00:19:20.918 } 00:19:20.918 ]' 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.918 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.176 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.176 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.176 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.176 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:21.176 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.435 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:21.435 06:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.001 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:22.002 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.002 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.002 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.002 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.002 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.002 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.566 00:19:22.566 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.566 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.566 06:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.566 { 00:19:22.566 "auth": { 00:19:22.566 "dhgroup": "ffdhe3072", 00:19:22.566 "digest": "sha384", 00:19:22.566 "state": "completed" 00:19:22.566 }, 00:19:22.566 "cntlid": 71, 00:19:22.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:22.566 "listen_address": { 00:19:22.566 "adrfam": "IPv4", 00:19:22.566 "traddr": "10.0.0.3", 00:19:22.566 "trsvcid": "4420", 00:19:22.566 "trtype": "TCP" 00:19:22.566 }, 00:19:22.566 "peer_address": { 00:19:22.566 "adrfam": "IPv4", 00:19:22.566 "traddr": "10.0.0.1", 00:19:22.566 "trsvcid": "43076", 00:19:22.566 "trtype": "TCP" 00:19:22.566 }, 00:19:22.566 "qid": 0, 00:19:22.566 "state": "enabled", 00:19:22.566 "thread": "nvmf_tgt_poll_group_000" 00:19:22.566 } 00:19:22.566 ]' 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.566 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.839 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.839 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.839 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.839 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.839 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.097 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:23.097 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.663 06:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.663 06:21:43 
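
Note: the qpairs JSON blocks captured above are validated field by field; "state": "completed" means the DH-HMAC-CHAP transaction finished on qpair 0. A sketch of the recurring checks (the backslash-escaped patterns in the trace are just xtrace's rendering of plain string comparisons; the <<< piping is illustrative):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
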
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.663 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.936 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.222 { 00:19:24.222 "auth": { 00:19:24.222 "dhgroup": "ffdhe4096", 00:19:24.222 "digest": "sha384", 00:19:24.222 "state": "completed" 00:19:24.222 }, 00:19:24.222 "cntlid": 73, 00:19:24.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:24.222 "listen_address": { 00:19:24.222 "adrfam": "IPv4", 00:19:24.222 "traddr": "10.0.0.3", 00:19:24.222 "trsvcid": "4420", 00:19:24.222 "trtype": "TCP" 00:19:24.222 }, 00:19:24.222 "peer_address": { 00:19:24.222 "adrfam": "IPv4", 00:19:24.222 "traddr": "10.0.0.1", 00:19:24.222 "trsvcid": "43104", 00:19:24.222 "trtype": "TCP" 00:19:24.222 }, 00:19:24.222 "qid": 0, 00:19:24.222 "state": "enabled", 00:19:24.222 "thread": "nvmf_tgt_poll_group_000" 00:19:24.222 } 00:19:24.222 ]' 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.222 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.481 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.481 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.481 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.481 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:24.481 06:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.047 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.305 06:21:44 
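
Note: the ckey array at target/auth.sh@68 makes the controller key optional per key id: ${ckeys[$3]:+...} expands to nothing when no controller key is defined, which is why the key3 rounds call nvmf_subsystem_add_host with --dhchap-key key3 alone (host-only authentication) while the key0-key2 rounds also pass --dhchap-ctrlr-key (bidirectional). Sketch, with $subnqn/$hostnqn as illustrative placeholders:

    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # $3 is the key id
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"              # empty array expands to no words
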
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.305 06:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.563 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.821 { 00:19:25.821 "auth": { 00:19:25.821 "dhgroup": "ffdhe4096", 00:19:25.821 "digest": "sha384", 00:19:25.821 "state": "completed" 00:19:25.821 }, 00:19:25.821 "cntlid": 75, 00:19:25.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:25.821 "listen_address": { 00:19:25.821 "adrfam": "IPv4", 00:19:25.821 "traddr": "10.0.0.3", 00:19:25.821 "trsvcid": "4420", 00:19:25.821 "trtype": "TCP" 00:19:25.821 }, 00:19:25.821 "peer_address": { 00:19:25.821 "adrfam": "IPv4", 00:19:25.821 "traddr": "10.0.0.1", 00:19:25.821 "trsvcid": "43140", 00:19:25.821 "trtype": "TCP" 00:19:25.821 }, 00:19:25.821 "qid": 0, 00:19:25.821 "state": "enabled", 00:19:25.821 "thread": "nvmf_tgt_poll_group_000" 00:19:25.821 } 00:19:25.821 ]' 00:19:25.821 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.078 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.078 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.078 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:19:26.078 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.078 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.079 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.079 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.336 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:26.336 06:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.902 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.469 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.469 { 00:19:27.469 "auth": { 00:19:27.469 "dhgroup": "ffdhe4096", 00:19:27.469 "digest": "sha384", 00:19:27.469 "state": "completed" 00:19:27.469 }, 00:19:27.469 "cntlid": 77, 00:19:27.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:27.469 "listen_address": { 00:19:27.469 "adrfam": "IPv4", 00:19:27.469 "traddr": "10.0.0.3", 00:19:27.469 "trsvcid": "4420", 00:19:27.469 "trtype": "TCP" 00:19:27.469 }, 00:19:27.469 "peer_address": { 00:19:27.469 "adrfam": "IPv4", 00:19:27.469 "traddr": "10.0.0.1", 00:19:27.469 "trsvcid": "43170", 00:19:27.469 "trtype": "TCP" 00:19:27.469 }, 00:19:27.469 "qid": 0, 00:19:27.469 "state": "enabled", 00:19:27.469 "thread": "nvmf_tgt_poll_group_000" 00:19:27.469 } 00:19:27.469 ]' 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.469 06:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:19:27.727 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.727 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.727 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.727 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.728 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.985 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:27.985 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:28.551 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.551 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:28.551 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.551 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.551 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.551 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.552 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.552 06:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.809 06:21:48 
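
Note: after the SPDK bdev_nvme path, each round repeats the handshake from the kernel initiator. nvme-cli takes the same keys in their DHHC-1:<xx>:<base64>: form, and --dhchap-ctrl-secret appears only for the bidirectional key ids. Sketch with $hostnqn, $hostid, $key, and $ckey as placeholders for the values shown in the trace:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)
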
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.809 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:28.810 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.810 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.067 00:19:29.068 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.068 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.068 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.325 { 00:19:29.325 "auth": { 00:19:29.325 "dhgroup": "ffdhe4096", 00:19:29.325 "digest": "sha384", 00:19:29.325 "state": "completed" 00:19:29.325 }, 00:19:29.325 "cntlid": 79, 00:19:29.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:29.325 "listen_address": { 00:19:29.325 "adrfam": "IPv4", 00:19:29.325 "traddr": "10.0.0.3", 00:19:29.325 "trsvcid": "4420", 00:19:29.325 "trtype": "TCP" 00:19:29.325 }, 00:19:29.325 "peer_address": { 00:19:29.325 "adrfam": "IPv4", 00:19:29.325 "traddr": "10.0.0.1", 00:19:29.325 "trsvcid": "33584", 00:19:29.325 "trtype": "TCP" 00:19:29.325 }, 00:19:29.325 "qid": 0, 00:19:29.325 "state": "enabled", 00:19:29.325 "thread": "nvmf_tgt_poll_group_000" 00:19:29.325 } 00:19:29.325 ]' 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.325 06:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.325 06:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.583 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:29.583 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.151 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.408 06:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:30.974
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:30.974 {
00:19:30.974 "auth": {
00:19:30.974 "dhgroup": "ffdhe6144",
00:19:30.974 "digest": "sha384",
00:19:30.974 "state": "completed"
00:19:30.974 },
00:19:30.974 "cntlid": 81,
00:19:30.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30",
00:19:30.974 "listen_address": {
00:19:30.974 "adrfam": "IPv4",
00:19:30.974 "traddr": "10.0.0.3",
00:19:30.974 "trsvcid": "4420",
00:19:30.974 "trtype": "TCP"
00:19:30.974 },
00:19:30.974 "peer_address": {
00:19:30.974 "adrfam": "IPv4",
00:19:30.974 "traddr": "10.0.0.1",
00:19:30.974 "trsvcid": "33606",
00:19:30.974 "trtype": "TCP"
00:19:30.974 },
00:19:30.974 "qid": 0,
00:19:30.974 "state": "enabled",
00:19:30.974 "thread": "nvmf_tgt_poll_group_000"
00:19:30.974 }
00:19:30.974 ]'
00:19:30.974 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
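From here the ffdhe6144 pass repeats the same cycle for each key index. Pieced together from the @65-@78 xtrace lines, the connect_authenticate helper looks roughly like this; it is a sketch, so hostnqn and the exact argument plumbing are assumptions, while the RPC names are taken verbatim from the trace. Note the @68 idiom: bash's ${var:+word} alternate value makes ckey an empty array when no controller key exists for that index, so "${ckey[@]}" expands to nothing and the same command lines work with or without bidirectional authentication.

    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest=$1 dhgroup=$2 key=key$3
        # Optional controller key: expands to nothing when ckeys[$3] is unset.
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

        # Target side: authorize the host NQN with the DH-HMAC-CHAP key...
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
        # ...host side: attach, which forces the authentication handshake.
        bdev_connect -b nvme0 --dhchap-key "$key" "${ckey[@]}"

        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0
    }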
00:19:31.231 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.231 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.231 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.231 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.231 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.231 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.231 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.489 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:31.489 06:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.054 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.312 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.570 00:19:32.570 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.570 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.570 06:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.828 { 00:19:32.828 "auth": { 00:19:32.828 "dhgroup": "ffdhe6144", 00:19:32.828 "digest": "sha384", 00:19:32.828 "state": "completed" 00:19:32.828 }, 00:19:32.828 "cntlid": 83, 00:19:32.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:32.828 "listen_address": { 00:19:32.828 "adrfam": "IPv4", 00:19:32.828 "traddr": "10.0.0.3", 00:19:32.828 "trsvcid": "4420", 00:19:32.828 "trtype": "TCP" 00:19:32.828 }, 00:19:32.828 "peer_address": { 00:19:32.828 "adrfam": "IPv4", 00:19:32.828 "traddr": "10.0.0.1", 00:19:32.828 "trsvcid": "33636", 00:19:32.828 "trtype": "TCP" 00:19:32.828 }, 00:19:32.828 "qid": 0, 00:19:32.828 "state": 
"enabled", 00:19:32.828 "thread": "nvmf_tgt_poll_group_000" 00:19:32.828 } 00:19:32.828 ]' 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.828 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.086 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:33.086 06:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.651 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.934 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.192 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.450 { 00:19:34.450 "auth": { 00:19:34.450 "dhgroup": "ffdhe6144", 00:19:34.450 "digest": "sha384", 00:19:34.450 "state": "completed" 00:19:34.450 }, 00:19:34.450 "cntlid": 85, 00:19:34.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:34.450 "listen_address": { 00:19:34.450 "adrfam": "IPv4", 00:19:34.450 "traddr": "10.0.0.3", 00:19:34.450 "trsvcid": "4420", 00:19:34.450 "trtype": "TCP" 00:19:34.450 }, 00:19:34.450 "peer_address": { 00:19:34.450 "adrfam": "IPv4", 00:19:34.450 "traddr": "10.0.0.1", 00:19:34.450 
"trsvcid": "33650", 00:19:34.450 "trtype": "TCP" 00:19:34.450 }, 00:19:34.450 "qid": 0, 00:19:34.450 "state": "enabled", 00:19:34.450 "thread": "nvmf_tgt_poll_group_000" 00:19:34.450 } 00:19:34.450 ]' 00:19:34.450 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.708 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.708 06:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.708 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.708 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.708 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.708 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.708 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.966 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:34.966 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.531 06:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.531 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.790 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.790 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.790 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:35.790 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.046 00:19:36.046 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.046 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.046 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.304 { 00:19:36.304 "auth": { 00:19:36.304 "dhgroup": "ffdhe6144", 00:19:36.304 "digest": "sha384", 00:19:36.304 "state": "completed" 00:19:36.304 }, 00:19:36.304 "cntlid": 87, 00:19:36.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:36.304 "listen_address": { 00:19:36.304 "adrfam": "IPv4", 00:19:36.304 "traddr": "10.0.0.3", 00:19:36.304 "trsvcid": "4420", 00:19:36.304 "trtype": "TCP" 00:19:36.304 }, 00:19:36.304 "peer_address": { 00:19:36.304 "adrfam": "IPv4", 00:19:36.304 "traddr": "10.0.0.1", 
00:19:36.304 "trsvcid": "33674", 00:19:36.304 "trtype": "TCP" 00:19:36.304 }, 00:19:36.304 "qid": 0, 00:19:36.304 "state": "enabled", 00:19:36.304 "thread": "nvmf_tgt_poll_group_000" 00:19:36.304 } 00:19:36.304 ]' 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.304 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.562 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:36.562 06:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.127 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.385 06:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.951 00:19:37.951 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.951 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.951 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.209 { 00:19:38.209 "auth": { 00:19:38.209 "dhgroup": "ffdhe8192", 00:19:38.209 "digest": "sha384", 00:19:38.209 "state": "completed" 00:19:38.209 }, 00:19:38.209 "cntlid": 89, 00:19:38.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:38.209 "listen_address": { 00:19:38.209 "adrfam": "IPv4", 00:19:38.209 "traddr": "10.0.0.3", 00:19:38.209 "trsvcid": "4420", 00:19:38.209 "trtype": "TCP" 
00:19:38.209 }, 00:19:38.209 "peer_address": { 00:19:38.209 "adrfam": "IPv4", 00:19:38.209 "traddr": "10.0.0.1", 00:19:38.209 "trsvcid": "33696", 00:19:38.209 "trtype": "TCP" 00:19:38.209 }, 00:19:38.209 "qid": 0, 00:19:38.209 "state": "enabled", 00:19:38.209 "thread": "nvmf_tgt_poll_group_000" 00:19:38.209 } 00:19:38.209 ]' 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.209 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.466 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:38.466 06:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:39.032 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:39.290 06:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.290 06:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.855 00:19:39.856 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.856 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.856 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.113 { 00:19:40.113 "auth": { 00:19:40.113 "dhgroup": "ffdhe8192", 00:19:40.113 "digest": "sha384", 00:19:40.113 "state": "completed" 00:19:40.113 }, 00:19:40.113 "cntlid": 91, 00:19:40.113 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:40.113 "listen_address": { 00:19:40.113 "adrfam": "IPv4", 00:19:40.113 "traddr": "10.0.0.3", 00:19:40.113 "trsvcid": "4420", 00:19:40.113 "trtype": "TCP" 00:19:40.113 }, 00:19:40.113 "peer_address": { 00:19:40.113 "adrfam": "IPv4", 00:19:40.113 "traddr": "10.0.0.1", 00:19:40.113 "trsvcid": "55082", 00:19:40.113 "trtype": "TCP" 00:19:40.113 }, 00:19:40.113 "qid": 0, 00:19:40.113 "state": "enabled", 00:19:40.113 "thread": "nvmf_tgt_poll_group_000" 00:19:40.113 } 00:19:40.113 ]' 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.113 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.371 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:40.371 06:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:40.937 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.195 06:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.773 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.773 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.773 { 00:19:41.773 "auth": { 00:19:41.773 "dhgroup": "ffdhe8192", 
00:19:41.773 "digest": "sha384", 00:19:41.773 "state": "completed" 00:19:41.773 }, 00:19:41.773 "cntlid": 93, 00:19:41.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:41.773 "listen_address": { 00:19:41.773 "adrfam": "IPv4", 00:19:41.773 "traddr": "10.0.0.3", 00:19:41.773 "trsvcid": "4420", 00:19:41.773 "trtype": "TCP" 00:19:41.773 }, 00:19:41.773 "peer_address": { 00:19:41.774 "adrfam": "IPv4", 00:19:41.774 "traddr": "10.0.0.1", 00:19:41.774 "trsvcid": "55116", 00:19:41.774 "trtype": "TCP" 00:19:41.774 }, 00:19:41.774 "qid": 0, 00:19:41.774 "state": "enabled", 00:19:41.774 "thread": "nvmf_tgt_poll_group_000" 00:19:41.774 } 00:19:41.774 ]' 00:19:41.774 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.045 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.045 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.045 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.045 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.045 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.045 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.045 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.304 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:42.304 06:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.869 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.870 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.127 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.127 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.128 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.128 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.731 00:19:43.731 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.731 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.731 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.731 { 00:19:43.731 "auth": { 00:19:43.731 "dhgroup": 
"ffdhe8192", 00:19:43.731 "digest": "sha384", 00:19:43.731 "state": "completed" 00:19:43.731 }, 00:19:43.731 "cntlid": 95, 00:19:43.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:43.731 "listen_address": { 00:19:43.731 "adrfam": "IPv4", 00:19:43.731 "traddr": "10.0.0.3", 00:19:43.731 "trsvcid": "4420", 00:19:43.731 "trtype": "TCP" 00:19:43.731 }, 00:19:43.731 "peer_address": { 00:19:43.731 "adrfam": "IPv4", 00:19:43.731 "traddr": "10.0.0.1", 00:19:43.731 "trsvcid": "55148", 00:19:43.731 "trtype": "TCP" 00:19:43.731 }, 00:19:43.731 "qid": 0, 00:19:43.731 "state": "enabled", 00:19:43.731 "thread": "nvmf_tgt_poll_group_000" 00:19:43.731 } 00:19:43.731 ]' 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.731 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.018 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.018 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.018 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.018 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:44.018 06:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.585 
00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:44.585 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:44.844 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:44.845 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:45.102
00:19:45.102 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:45.102 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:45.102 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:45.359 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:45.359 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:45.359 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.359 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:45.359 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.359 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.359 { 00:19:45.359 "auth": { 00:19:45.359 "dhgroup": "null", 00:19:45.359 "digest": "sha512", 00:19:45.359 "state": "completed" 00:19:45.359 }, 00:19:45.359 "cntlid": 97, 00:19:45.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:45.359 "listen_address": { 00:19:45.359 "adrfam": "IPv4", 00:19:45.359 "traddr": "10.0.0.3", 00:19:45.359 "trsvcid": "4420", 00:19:45.359 "trtype": "TCP" 00:19:45.359 }, 00:19:45.359 "peer_address": { 00:19:45.359 "adrfam": "IPv4", 00:19:45.359 "traddr": "10.0.0.1", 00:19:45.359 "trsvcid": "55172", 00:19:45.359 "trtype": "TCP" 00:19:45.359 }, 00:19:45.359 "qid": 0, 00:19:45.359 "state": "enabled", 00:19:45.360 "thread": "nvmf_tgt_poll_group_000" 00:19:45.360 } 00:19:45.360 ]' 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.360 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.617 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:45.617 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.181 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.438 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.696 00:19:46.696 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.696 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.696 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.954 06:22:06 
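Each connect_authenticate pass first authorizes the host NQN on the target with the key pair under test, then attaches an SPDK host controller presenting the same keys, as in the RPCs traced above. Condensed from the trace (key0/ckey0 name secrets registered earlier in the script, outside this excerpt):

    # Target side: require DHCHAP for this host; supplying a controller key
    # makes the authentication bidirectional.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach with the matching keys.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
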
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.954 { 00:19:46.954 "auth": { 00:19:46.954 "dhgroup": "null", 00:19:46.954 "digest": "sha512", 00:19:46.954 "state": "completed" 00:19:46.954 }, 00:19:46.954 "cntlid": 99, 00:19:46.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:46.954 "listen_address": { 00:19:46.954 "adrfam": "IPv4", 00:19:46.954 "traddr": "10.0.0.3", 00:19:46.954 "trsvcid": "4420", 00:19:46.954 "trtype": "TCP" 00:19:46.954 }, 00:19:46.954 "peer_address": { 00:19:46.954 "adrfam": "IPv4", 00:19:46.954 "traddr": "10.0.0.1", 00:19:46.954 "trsvcid": "55186", 00:19:46.954 "trtype": "TCP" 00:19:46.954 }, 00:19:46.954 "qid": 0, 00:19:46.954 "state": "enabled", 00:19:46.954 "thread": "nvmf_tgt_poll_group_000" 00:19:46.954 } 00:19:46.954 ]' 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.954 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.212 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:47.212 06:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:47.775 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.775 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:47.775 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.775 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.775 06:22:07 
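The [[ sha512 == \s\h\a\5\1\2 ]]-style lines are xtrace output of ordinary bash pattern matches: the script reads the negotiated parameters back from nvmf_subsystem_get_qpairs and asserts each field. A compact equivalent of those checks (the jq filters are the ones in the trace; the qpairs variable name follows the script):

    # Assert the qpair negotiated the expected digest and dhgroup and that the
    # DH-HMAC-CHAP exchange completed.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
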
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.775 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.775 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:47.775 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.032 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.290 00:19:48.290 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.290 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.290 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.548 { 00:19:48.548 "auth": { 00:19:48.548 "dhgroup": "null", 00:19:48.548 "digest": "sha512", 00:19:48.548 "state": "completed" 00:19:48.548 }, 00:19:48.548 "cntlid": 101, 00:19:48.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:48.548 "listen_address": { 00:19:48.548 "adrfam": "IPv4", 00:19:48.548 "traddr": "10.0.0.3", 00:19:48.548 "trsvcid": "4420", 00:19:48.548 "trtype": "TCP" 00:19:48.548 }, 00:19:48.548 "peer_address": { 00:19:48.548 "adrfam": "IPv4", 00:19:48.548 "traddr": "10.0.0.1", 00:19:48.548 "trsvcid": "55214", 00:19:48.548 "trtype": "TCP" 00:19:48.548 }, 00:19:48.548 "qid": 0, 00:19:48.548 "state": "enabled", 00:19:48.548 "thread": "nvmf_tgt_poll_group_000" 00:19:48.548 } 00:19:48.548 ]' 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.548 06:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.548 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:48.548 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.548 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.548 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.548 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.806 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:48.806 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
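After the SPDK-host pass, the same credentials are exercised from the kernel initiator; nvme-cli takes the DHHC-1 secrets literally rather than by key name. Condensed from the connect in the trace (secrets abbreviated here; the full values appear above):

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 \
        --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 \
        --dhchap-secret 'DHHC-1:02:...' \
        --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
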
-- common/autotest_common.sh@10 -- # set +x 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.374 06:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.633 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.891 00:19:49.891 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.891 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.891 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.148 { 00:19:50.148 "auth": { 00:19:50.148 "dhgroup": "null", 00:19:50.148 "digest": "sha512", 00:19:50.148 "state": "completed" 00:19:50.148 }, 00:19:50.148 "cntlid": 103, 00:19:50.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:50.148 "listen_address": { 00:19:50.148 "adrfam": "IPv4", 00:19:50.148 "traddr": "10.0.0.3", 00:19:50.148 "trsvcid": "4420", 00:19:50.148 "trtype": "TCP" 00:19:50.148 }, 00:19:50.148 "peer_address": { 00:19:50.148 "adrfam": "IPv4", 00:19:50.148 "traddr": "10.0.0.1", 00:19:50.148 "trsvcid": "47062", 00:19:50.148 "trtype": "TCP" 00:19:50.148 }, 00:19:50.148 "qid": 0, 00:19:50.148 "state": "enabled", 00:19:50.148 "thread": "nvmf_tgt_poll_group_000" 00:19:50.148 } 00:19:50.148 ]' 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.148 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.405 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.405 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.405 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.405 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.405 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.405 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:50.405 06:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:50.970 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.228 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:51.228 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.228 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
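The key3 passes differ from the key0..key2 ones: ckey is assembled with a ${ckeys[$3]:+...} expansion (visible in the trace), so when no controller key exists for that index the option vanishes and authentication is unidirectional, with the target authenticating the host only. That is why add_host and both connects above carry only --dhchap-key key3 / --dhchap-secret. A sketch of the expansion ($3 is the key id; subnqn and hostnqn stand in for the values visible in the trace):

    # Expands to nothing when ckeys[3] is unset, dropping --dhchap-ctrlr-key.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
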
]] 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.229 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.487 00:19:51.487 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.487 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.487 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.745 
06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.745 { 00:19:51.745 "auth": { 00:19:51.745 "dhgroup": "ffdhe2048", 00:19:51.745 "digest": "sha512", 00:19:51.745 "state": "completed" 00:19:51.745 }, 00:19:51.745 "cntlid": 105, 00:19:51.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:51.745 "listen_address": { 00:19:51.745 "adrfam": "IPv4", 00:19:51.745 "traddr": "10.0.0.3", 00:19:51.745 "trsvcid": "4420", 00:19:51.745 "trtype": "TCP" 00:19:51.745 }, 00:19:51.745 "peer_address": { 00:19:51.745 "adrfam": "IPv4", 00:19:51.745 "traddr": "10.0.0.1", 00:19:51.745 "trsvcid": "47092", 00:19:51.745 "trtype": "TCP" 00:19:51.745 }, 00:19:51.745 "qid": 0, 00:19:51.745 "state": "enabled", 00:19:51.745 "thread": "nvmf_tgt_poll_group_000" 00:19:51.745 } 00:19:51.745 ]' 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.745 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.003 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.003 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.003 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.003 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.003 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.260 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:52.260 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:52.826 06:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.826 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.083 00:19:53.341 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.341 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.341 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.342 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:53.342 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.342 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.342 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.342 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.342 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.342 { 00:19:53.342 "auth": { 00:19:53.342 "dhgroup": "ffdhe2048", 00:19:53.342 "digest": "sha512", 00:19:53.342 "state": "completed" 00:19:53.342 }, 00:19:53.342 "cntlid": 107, 00:19:53.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:53.342 "listen_address": { 00:19:53.342 "adrfam": "IPv4", 00:19:53.342 "traddr": "10.0.0.3", 00:19:53.342 "trsvcid": "4420", 00:19:53.342 "trtype": "TCP" 00:19:53.342 }, 00:19:53.342 "peer_address": { 00:19:53.342 "adrfam": "IPv4", 00:19:53.342 "traddr": "10.0.0.1", 00:19:53.342 "trsvcid": "47128", 00:19:53.342 "trtype": "TCP" 00:19:53.342 }, 00:19:53.342 "qid": 0, 00:19:53.342 "state": "enabled", 00:19:53.342 "thread": "nvmf_tgt_poll_group_000" 00:19:53.342 } 00:19:53.342 ]' 00:19:53.342 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.599 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.599 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.599 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.599 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.599 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.599 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.599 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.856 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:53.856 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.422 06:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.680 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.939 00:19:54.939 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.939 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.939 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.198 { 00:19:55.198 "auth": { 00:19:55.198 "dhgroup": "ffdhe2048", 00:19:55.198 "digest": "sha512", 00:19:55.198 "state": "completed" 00:19:55.198 }, 00:19:55.198 "cntlid": 109, 00:19:55.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:55.198 "listen_address": { 00:19:55.198 "adrfam": "IPv4", 00:19:55.198 "traddr": "10.0.0.3", 00:19:55.198 "trsvcid": "4420", 00:19:55.198 "trtype": "TCP" 00:19:55.198 }, 00:19:55.198 "peer_address": { 00:19:55.198 "adrfam": "IPv4", 00:19:55.198 "traddr": "10.0.0.1", 00:19:55.198 "trsvcid": "47158", 00:19:55.198 "trtype": "TCP" 00:19:55.198 }, 00:19:55.198 "qid": 0, 00:19:55.198 "state": "enabled", 00:19:55.198 "thread": "nvmf_tgt_poll_group_000" 00:19:55.198 } 00:19:55.198 ]' 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.198 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.456 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:55.456 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
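Each pass also cleans up after itself so the next digest/dhgroup/key combination starts from scratch: the SPDK controller is detached once the qpair checks pass, and after the kernel-initiator re-check the session is disconnected and the host de-authorized. Condensed from the commands in the trace:

    # Drop the SPDK host controller, then the kernel-initiator session,
    # then de-authorize the host on the target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30
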
00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.021 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.279 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.537 00:19:56.538 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.538 06:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.538 06:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.795 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.796 { 00:19:56.796 "auth": { 00:19:56.796 "dhgroup": "ffdhe2048", 00:19:56.796 "digest": "sha512", 00:19:56.796 "state": "completed" 00:19:56.796 }, 00:19:56.796 "cntlid": 111, 00:19:56.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:56.796 "listen_address": { 00:19:56.796 "adrfam": "IPv4", 00:19:56.796 "traddr": "10.0.0.3", 00:19:56.796 "trsvcid": "4420", 00:19:56.796 "trtype": "TCP" 00:19:56.796 }, 00:19:56.796 "peer_address": { 00:19:56.796 "adrfam": "IPv4", 00:19:56.796 "traddr": "10.0.0.1", 00:19:56.796 "trsvcid": "47190", 00:19:56.796 "trtype": "TCP" 00:19:56.796 }, 00:19:56.796 "qid": 0, 00:19:56.796 "state": "enabled", 00:19:56.796 "thread": "nvmf_tgt_poll_group_000" 00:19:56.796 } 00:19:56.796 ]' 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.796 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.054 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:57.054 06:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:19:57.619 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
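A note on the secret values: the DHHC-1:<nn>: prefix records which HMAC transformation the secret was generated with (00 = no transformation, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the trace mixes DHHC-1:00:, 01:, 02: and 03: keys across the key ids. Assuming a reasonably recent nvme-cli that provides gen-dhchap-key, such keys can be produced as below; this invocation is illustrative only and not part of the trace:

    # Hypothetical: generate a 64-byte secret transformed with SHA-512 (hmac 3)
    # for the host NQN used in this test.
    nvme gen-dhchap-key --hmac 3 --key-length 64 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30
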
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.620 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.878 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.135 00:19:58.135 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.135 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.135 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.393 { 00:19:58.393 "auth": { 00:19:58.393 "dhgroup": "ffdhe3072", 00:19:58.393 "digest": "sha512", 00:19:58.393 "state": "completed" 00:19:58.393 }, 00:19:58.393 "cntlid": 113, 00:19:58.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:58.393 "listen_address": { 00:19:58.393 "adrfam": "IPv4", 00:19:58.393 "traddr": "10.0.0.3", 00:19:58.393 "trsvcid": "4420", 00:19:58.393 "trtype": "TCP" 00:19:58.393 }, 00:19:58.393 "peer_address": { 00:19:58.393 "adrfam": "IPv4", 00:19:58.393 "traddr": "10.0.0.1", 00:19:58.393 "trsvcid": "47234", 00:19:58.393 "trtype": "TCP" 00:19:58.393 }, 00:19:58.393 "qid": 0, 00:19:58.393 "state": "enabled", 00:19:58.393 "thread": "nvmf_tgt_poll_group_000" 00:19:58.393 } 00:19:58.393 ]' 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.393 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.394 06:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.652 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:58.652 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret 
DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:19:59.217 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.217 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:19:59.217 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.217 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.217 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.218 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.218 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.218 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.476 06:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
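The same handshake is then repeated from the kernel initiator via nvme-cli, exactly as in the @36 lines above; the full DHHC-1 secret strings are abbreviated here since they appear verbatim in the trace:

    # -i = nr-io-queues, -l = ctrl-loss-tmo, as in the traced command
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 \
        --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 \
        --dhchap-secret 'DHHC-1:00:OTFl...ll6uxg==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:ODkw...YVEi0zs=:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0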
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.733 00:19:59.733 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.733 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.734 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.990 { 00:19:59.990 "auth": { 00:19:59.990 "dhgroup": "ffdhe3072", 00:19:59.990 "digest": "sha512", 00:19:59.990 "state": "completed" 00:19:59.990 }, 00:19:59.990 "cntlid": 115, 00:19:59.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:19:59.990 "listen_address": { 00:19:59.990 "adrfam": "IPv4", 00:19:59.990 "traddr": "10.0.0.3", 00:19:59.990 "trsvcid": "4420", 00:19:59.990 "trtype": "TCP" 00:19:59.990 }, 00:19:59.990 "peer_address": { 00:19:59.990 "adrfam": "IPv4", 00:19:59.990 "traddr": "10.0.0.1", 00:19:59.990 "trsvcid": "37856", 00:19:59.990 "trtype": "TCP" 00:19:59.990 }, 00:19:59.990 "qid": 0, 00:19:59.990 "state": "enabled", 00:19:59.990 "thread": "nvmf_tgt_poll_group_000" 00:19:59.990 } 00:19:59.990 ]' 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.990 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.248 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:00.248 06:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid 
c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.813 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.071 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
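The verification steps between attach and detach reduce to a controller-name check plus three jq probes of nvmf_subsystem_get_qpairs output (the backslash-escaped right-hand sides in the trace are just bash xtrace rendering of the pattern match); condensed, using the trace's own hostrpc/rpc_cmd helpers:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # controller came up
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)  # target-side view
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished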
-q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.330 00:20:01.330 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.330 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.330 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.588 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.588 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.588 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.588 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.588 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.588 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.588 { 00:20:01.588 "auth": { 00:20:01.588 "dhgroup": "ffdhe3072", 00:20:01.588 "digest": "sha512", 00:20:01.588 "state": "completed" 00:20:01.588 }, 00:20:01.588 "cntlid": 117, 00:20:01.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:01.588 "listen_address": { 00:20:01.588 "adrfam": "IPv4", 00:20:01.588 "traddr": "10.0.0.3", 00:20:01.588 "trsvcid": "4420", 00:20:01.588 "trtype": "TCP" 00:20:01.588 }, 00:20:01.588 "peer_address": { 00:20:01.588 "adrfam": "IPv4", 00:20:01.588 "traddr": "10.0.0.1", 00:20:01.588 "trsvcid": "37876", 00:20:01.588 "trtype": "TCP" 00:20:01.588 }, 00:20:01.588 "qid": 0, 00:20:01.588 "state": "enabled", 00:20:01.588 "thread": "nvmf_tgt_poll_group_000" 00:20:01.588 } 00:20:01.588 ]' 00:20:01.588 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.588 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.588 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.588 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.588 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.588 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.588 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.588 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.845 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:01.845 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.453 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.712 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
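The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible above explains why this key3 pass registers the host with --dhchap-key key3 alone: the test's ckeys array evidently has no entry at index 3, and bash's :+ form makes the whole flag pair vanish when the value is unset or empty. A minimal, self-contained illustration (array contents assumed for demonstration):

    keys=(key0 key1 key2 key3)
    ckeys=(ckey0 ckey1 ckey2)          # index 3 deliberately absent -> one-way auth
    for i in "${!keys[@]}"; do
        # ${ckeys[$i]:+...} expands to the bracketed words only if ckeys[$i] is non-empty
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo nvmf_subsystem_add_host ... --dhchap-key "key$i" "${ckey[@]}"
    done
    # prints --dhchap-ctrlr-key for ckey0..ckey2, and nothing extra for key3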
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.970 00:20:02.970 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.970 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.970 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.227 { 00:20:03.227 "auth": { 00:20:03.227 "dhgroup": "ffdhe3072", 00:20:03.227 "digest": "sha512", 00:20:03.227 "state": "completed" 00:20:03.227 }, 00:20:03.227 "cntlid": 119, 00:20:03.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:03.227 "listen_address": { 00:20:03.227 "adrfam": "IPv4", 00:20:03.227 "traddr": "10.0.0.3", 00:20:03.227 "trsvcid": "4420", 00:20:03.227 "trtype": "TCP" 00:20:03.227 }, 00:20:03.227 "peer_address": { 00:20:03.227 "adrfam": "IPv4", 00:20:03.227 "traddr": "10.0.0.1", 00:20:03.227 "trsvcid": "37904", 00:20:03.227 "trtype": "TCP" 00:20:03.227 }, 00:20:03.227 "qid": 0, 00:20:03.227 "state": "enabled", 00:20:03.227 "thread": "nvmf_tgt_poll_group_000" 00:20:03.227 } 00:20:03.227 ]' 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.227 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.485 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:03.485 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.051 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.309 06:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
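The @119-@123 markers above show the dhgroup loop advancing from ffdhe3072 to ffdhe4096; schematically, this whole section is driven by a two-level loop like the following (the full dhgroups list is assumed; this part of the run exercises ffdhe3072, ffdhe4096, and ffdhe6144):

    for dhgroup in "${dhgroups[@]}"; do        # @119
        for keyid in "${!keys[@]}"; do         # @120
            # re-pin the host to the combination under test, then run one full
            # add_host / attach / verify / detach / nvme-cli connect cycle
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"                    # @121
            connect_authenticate sha512 "$dhgroup" "$keyid"     # @123
        done
    done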
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.567 00:20:04.567 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.567 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.567 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.825 { 00:20:04.825 "auth": { 00:20:04.825 "dhgroup": "ffdhe4096", 00:20:04.825 "digest": "sha512", 00:20:04.825 "state": "completed" 00:20:04.825 }, 00:20:04.825 "cntlid": 121, 00:20:04.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:04.825 "listen_address": { 00:20:04.825 "adrfam": "IPv4", 00:20:04.825 "traddr": "10.0.0.3", 00:20:04.825 "trsvcid": "4420", 00:20:04.825 "trtype": "TCP" 00:20:04.825 }, 00:20:04.825 "peer_address": { 00:20:04.825 "adrfam": "IPv4", 00:20:04.825 "traddr": "10.0.0.1", 00:20:04.825 "trsvcid": "37924", 00:20:04.825 "trtype": "TCP" 00:20:04.825 }, 00:20:04.825 "qid": 0, 00:20:04.825 "state": "enabled", 00:20:04.825 "thread": "nvmf_tgt_poll_group_000" 00:20:04.825 } 00:20:04.825 ]' 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.825 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.082 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.083 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.083 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.083 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.083 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.341 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret 
DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:05.341 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.906 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.471 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.471 { 00:20:06.471 "auth": { 00:20:06.471 "dhgroup": "ffdhe4096", 00:20:06.471 "digest": "sha512", 00:20:06.471 "state": "completed" 00:20:06.471 }, 00:20:06.471 "cntlid": 123, 00:20:06.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:06.471 "listen_address": { 00:20:06.471 "adrfam": "IPv4", 00:20:06.471 "traddr": "10.0.0.3", 00:20:06.471 "trsvcid": "4420", 00:20:06.471 "trtype": "TCP" 00:20:06.471 }, 00:20:06.471 "peer_address": { 00:20:06.471 "adrfam": "IPv4", 00:20:06.471 "traddr": "10.0.0.1", 00:20:06.471 "trsvcid": "37938", 00:20:06.471 "trtype": "TCP" 00:20:06.471 }, 00:20:06.471 "qid": 0, 00:20:06.471 "state": "enabled", 00:20:06.471 "thread": "nvmf_tgt_poll_group_000" 00:20:06.471 } 00:20:06.471 ]' 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.471 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.729 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.729 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.729 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.729 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.729 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.729 06:22:26 
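If the DHHC-1:NN: prefixes on the secrets throughout this trace look opaque: that is the NVMe DH-HMAC-CHAP secret representation, and the two-digit field indicates how the key material is transformed, which is why the four test keys carry different prefixes:

    # DHHC-1:<id>:<base64 of secret plus CRC-32>:
    #   00 = untransformed (cleartext) key    01 = HMAC-SHA-256
    #   02 = HMAC-SHA-384                     03 = HMAC-SHA-512
    # In this trace: key0/ckey0 is a 00/03 pair, key1/ckey1 is 01/02,
    # key2/ckey2 is 02/01, and key3 (host key only) is 03.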
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:06.729 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:07.295 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.552 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:07.553 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.553 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.553 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.553 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.553 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.553 06:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.553 06:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.553 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.118 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.118 { 00:20:08.118 "auth": { 00:20:08.118 "dhgroup": "ffdhe4096", 00:20:08.118 "digest": "sha512", 00:20:08.118 "state": "completed" 00:20:08.118 }, 00:20:08.118 "cntlid": 125, 00:20:08.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:08.118 "listen_address": { 00:20:08.118 "adrfam": "IPv4", 00:20:08.118 "traddr": "10.0.0.3", 00:20:08.118 "trsvcid": "4420", 00:20:08.118 "trtype": "TCP" 00:20:08.118 }, 00:20:08.118 "peer_address": { 00:20:08.118 "adrfam": "IPv4", 00:20:08.118 "traddr": "10.0.0.1", 00:20:08.118 "trsvcid": "37982", 00:20:08.118 "trtype": "TCP" 00:20:08.118 }, 00:20:08.118 "qid": 0, 00:20:08.118 "state": "enabled", 00:20:08.118 "thread": "nvmf_tgt_poll_group_000" 00:20:08.118 } 00:20:08.118 ]' 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.118 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.375 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.375 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.375 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.376 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:08.376 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:08.939 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.939 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:08.939 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.939 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
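The xtrace_disable / set +x / [[ 0 == 0 ]] triplet bracketing every rpc_cmd in this trace is the autotest harness muting shell tracing for the duration of the RPC and then asserting its exit status. Roughly, as a sketch of the pattern rather than the verbatim helpers in common/autotest_common.sh:

    xtrace_disable() { set +x; }   # hence "@563 xtrace_disable" chased by "@10 set +x"
    xtrace_restore() { set -x; }   # the real helper saves and restores the prior state

    rpc_cmd() {
        local rc
        xtrace_disable
        "$rootdir/scripts/rpc.py" "$@"
        rc=$?
        xtrace_restore
        [[ $rc == 0 ]]             # shows up as "[[ 0 == 0 ]]" when the RPC succeeded
    }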
bdev_connect -b nvme0 --dhchap-key key3 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.197 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.455 00:20:09.713 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.713 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.713 06:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.713 { 00:20:09.713 "auth": { 00:20:09.713 "dhgroup": "ffdhe4096", 00:20:09.713 "digest": "sha512", 00:20:09.713 "state": "completed" 00:20:09.713 }, 00:20:09.713 "cntlid": 127, 00:20:09.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:09.713 "listen_address": { 00:20:09.713 "adrfam": "IPv4", 00:20:09.713 "traddr": "10.0.0.3", 00:20:09.713 "trsvcid": "4420", 00:20:09.713 "trtype": "TCP" 00:20:09.713 }, 00:20:09.713 "peer_address": { 00:20:09.713 "adrfam": "IPv4", 00:20:09.713 "traddr": "10.0.0.1", 00:20:09.713 "trsvcid": "47874", 00:20:09.713 "trtype": "TCP" 00:20:09.713 }, 00:20:09.713 "qid": 0, 00:20:09.713 "state": "enabled", 00:20:09.713 "thread": "nvmf_tgt_poll_group_000" 00:20:09.713 } 00:20:09.713 ]' 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.713 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.970 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.970 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.970 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.970 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.970 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.226 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:10.226 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.797 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.055 06:22:30 
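One incidental signal worth reading out of these qpair dumps: the controller ID advances by two on every authenticated attach in this section (113, 115, ... 127 above, then 129 on the next attach), so each pass is negotiating a fresh controller rather than reusing the previous one. Extracting it is a one-liner against the same RPC:

    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[].cntlid'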
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.055 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.312 00:20:11.312 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.312 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.312 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.570 { 00:20:11.570 "auth": { 00:20:11.570 "dhgroup": "ffdhe6144", 00:20:11.570 "digest": "sha512", 00:20:11.570 "state": "completed" 00:20:11.570 }, 00:20:11.570 "cntlid": 129, 00:20:11.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:11.570 "listen_address": { 00:20:11.570 "adrfam": "IPv4", 00:20:11.570 "traddr": "10.0.0.3", 00:20:11.570 "trsvcid": "4420", 00:20:11.570 "trtype": "TCP" 00:20:11.570 }, 00:20:11.570 "peer_address": { 00:20:11.570 "adrfam": "IPv4", 00:20:11.570 "traddr": "10.0.0.1", 00:20:11.570 "trsvcid": "47900", 00:20:11.570 "trtype": "TCP" 00:20:11.570 }, 00:20:11.570 "qid": 0, 00:20:11.570 "state": "enabled", 00:20:11.570 "thread": "nvmf_tgt_poll_group_000" 00:20:11.570 } 00:20:11.570 ]' 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.570 06:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.570 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.570 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.570 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.570 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.570 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.828 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:11.828 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:12.393 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.394 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:12.394 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.394 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.394 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.394 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.394 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:12.394 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.652 06:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.652 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.218 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.218 { 00:20:13.218 "auth": { 00:20:13.218 "dhgroup": "ffdhe6144", 00:20:13.218 "digest": "sha512", 00:20:13.218 "state": "completed" 00:20:13.218 }, 00:20:13.218 "cntlid": 131, 00:20:13.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:13.218 "listen_address": { 00:20:13.218 "adrfam": "IPv4", 00:20:13.218 "traddr": "10.0.0.3", 00:20:13.218 "trsvcid": "4420", 00:20:13.218 "trtype": "TCP" 00:20:13.218 }, 00:20:13.218 "peer_address": { 00:20:13.218 "adrfam": "IPv4", 00:20:13.218 "traddr": "10.0.0.1", 00:20:13.218 "trsvcid": "47912", 00:20:13.218 "trtype": "TCP" 00:20:13.218 }, 00:20:13.218 "qid": 0, 00:20:13.218 "state": "enabled", 00:20:13.218 "thread": "nvmf_tgt_poll_group_000" 00:20:13.218 } 00:20:13.218 ]' 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.218 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:20:13.475 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.476 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.476 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.476 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:13.476 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:14.042 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.042 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:14.042 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.042 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.300 06:22:33 
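[editor's note] Note the two RPC planes in play throughout this section: rpc_cmd talks to the target over the default socket, while hostrpc (target/auth.sh@31) pins -s /var/tmp/host.sock so the same rpc.py script drives the separate host-side SPDK application. A sketch of the wrapper as the log shows it being expanded:

    # Sketch: one rpc.py, two applications -- the -s flag selects which
    # Unix-domain RPC socket (and therefore which SPDK instance) to drive.
    hostrpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
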
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.300 06:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.864 00:20:14.864 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.864 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.864 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.865 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.865 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.865 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.865 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.865 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.865 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.865 { 00:20:14.865 "auth": { 00:20:14.865 "dhgroup": "ffdhe6144", 00:20:14.865 "digest": "sha512", 00:20:14.865 "state": "completed" 00:20:14.865 }, 00:20:14.865 "cntlid": 133, 00:20:14.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:14.865 "listen_address": { 00:20:14.865 "adrfam": "IPv4", 00:20:14.865 "traddr": "10.0.0.3", 00:20:14.865 "trsvcid": "4420", 00:20:14.865 "trtype": "TCP" 00:20:14.865 }, 00:20:14.865 "peer_address": { 00:20:14.865 "adrfam": "IPv4", 00:20:14.865 "traddr": "10.0.0.1", 00:20:14.865 "trsvcid": "47930", 00:20:14.865 "trtype": "TCP" 00:20:14.865 }, 00:20:14.865 "qid": 0, 00:20:14.865 "state": "enabled", 00:20:14.865 "thread": "nvmf_tgt_poll_group_000" 00:20:14.865 } 00:20:14.865 ]' 00:20:14.865 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.122 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.122 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.122 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:20:15.122 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.122 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.122 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.122 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.380 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:15.380 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:15.943 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.200 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.458 00:20:16.458 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.458 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.458 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.716 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.716 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.716 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.716 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.717 { 00:20:16.717 "auth": { 00:20:16.717 "dhgroup": "ffdhe6144", 00:20:16.717 "digest": "sha512", 00:20:16.717 "state": "completed" 00:20:16.717 }, 00:20:16.717 "cntlid": 135, 00:20:16.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:16.717 "listen_address": { 00:20:16.717 "adrfam": "IPv4", 00:20:16.717 "traddr": "10.0.0.3", 00:20:16.717 "trsvcid": "4420", 00:20:16.717 "trtype": "TCP" 00:20:16.717 }, 00:20:16.717 "peer_address": { 00:20:16.717 "adrfam": "IPv4", 00:20:16.717 "traddr": "10.0.0.1", 00:20:16.717 "trsvcid": "47952", 00:20:16.717 "trtype": "TCP" 00:20:16.717 }, 00:20:16.717 "qid": 0, 00:20:16.717 "state": "enabled", 00:20:16.717 "thread": "nvmf_tgt_poll_group_000" 00:20:16.717 } 00:20:16.717 ]' 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
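[editor's note] key3 is the asymmetric case: nvmf_subsystem_add_host is issued with --dhchap-key key3 and no controller key, because ckeys[3] is empty and the ckey array built at target/auth.sh@68 expands to nothing, so this iteration exercises unidirectional authentication (the host proves itself to the controller, but not vice versa). The expansion in isolation, with placeholder values:

    # Sketch of the ${var:+word} pattern from target/auth.sh@68: the
    # --dhchap-ctrlr-key argument pair materializes only when a controller
    # key exists for this index (values below are placeholders).
    ckeys=( ckey-a ckey-b ckey-c "" )   # index 3 deliberately empty
    keyid=3
    ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    echo "${#ckey[@]}"                  # 0 -- no controller-key arguments
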
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.717 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.974 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:16.974 06:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.539 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.797 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.361 00:20:18.361 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.361 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.361 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.619 { 00:20:18.619 "auth": { 00:20:18.619 "dhgroup": "ffdhe8192", 00:20:18.619 "digest": "sha512", 00:20:18.619 "state": "completed" 00:20:18.619 }, 00:20:18.619 "cntlid": 137, 00:20:18.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:18.619 "listen_address": { 00:20:18.619 "adrfam": "IPv4", 00:20:18.619 "traddr": "10.0.0.3", 00:20:18.619 "trsvcid": "4420", 00:20:18.619 "trtype": "TCP" 00:20:18.619 }, 00:20:18.619 "peer_address": { 00:20:18.619 "adrfam": "IPv4", 00:20:18.619 "traddr": "10.0.0.1", 00:20:18.619 "trsvcid": "47976", 00:20:18.619 "trtype": "TCP" 00:20:18.619 }, 00:20:18.619 "qid": 0, 00:20:18.619 "state": "enabled", 00:20:18.619 "thread": "nvmf_tgt_poll_group_000" 00:20:18.619 } 00:20:18.619 ]' 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.619 06:22:38 
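[editor's note] The dhgroup loop has now moved from ffdhe6144 to ffdhe8192: before each reconnect, bdev_nvme_set_options narrows what the host initiator will offer during negotiation, and the qpair dump that follows (cntlid 137) confirms the target settled on exactly that combination. Reduced to the single call, using the hostrpc wrapper sketched earlier:

    # Sketch: pin the initiator to one digest/DH-group combination so the
    # negotiated auth parameters reported by nvmf_subsystem_get_qpairs are
    # deterministic for the assertions that follow.
    hostrpc bdev_nvme_set_options \
        --dhchap-digests sha512 \
        --dhchap-dhgroups ffdhe8192
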
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.619 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.878 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:18.878 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.489 06:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:19.746 06:22:39 
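[editor's note] The secrets passed on the nvme connect command lines follow the NVMe in-band authentication representation DHHC-1:xx:<base64 key material>:, where, as I read the spec, xx encodes how the secret was transformed (00 for a cleartext secret, 01/02/03 for SHA-256/384/512-hashed variants); treat that mapping as an assumption here, since the log itself does not state it. A sketch of producing such a secret with nvme-cli, assuming the gen-dhchap-key subcommand of recent nvme-cli releases:

    # Sketch (hypothetical invocation -- verify the flag name against your
    # nvme-cli version): emit a 32-byte host secret in DHHC-1:xx: form,
    # comparable to the --dhchap-secret values used above.
    nvme gen-dhchap-key --key-length=32
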
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.746 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.309 00:20:20.309 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.309 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.309 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.567 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.567 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.567 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.567 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.567 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.567 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.567 { 00:20:20.567 "auth": { 00:20:20.567 "dhgroup": "ffdhe8192", 00:20:20.567 "digest": "sha512", 00:20:20.567 "state": "completed" 00:20:20.567 }, 00:20:20.567 "cntlid": 139, 00:20:20.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:20.567 "listen_address": { 00:20:20.567 "adrfam": "IPv4", 00:20:20.567 "traddr": "10.0.0.3", 00:20:20.567 "trsvcid": "4420", 00:20:20.567 "trtype": "TCP" 00:20:20.567 }, 00:20:20.567 "peer_address": { 00:20:20.567 "adrfam": "IPv4", 00:20:20.567 "traddr": "10.0.0.1", 00:20:20.567 "trsvcid": "44806", 00:20:20.567 "trtype": "TCP" 00:20:20.567 }, 00:20:20.567 "qid": 0, 00:20:20.567 "state": "enabled", 00:20:20.567 "thread": "nvmf_tgt_poll_group_000" 00:20:20.567 } 00:20:20.567 ]' 00:20:20.567 06:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.568 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.568 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.568 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.568 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.568 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.568 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.568 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.825 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:20.825 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: --dhchap-ctrl-secret DHHC-1:02:NzJhYjM2NzY3Nzk0MWRjMWEzYTE3YWIxZTM2ZjQ1NGFmM2JlM2Y3ODkzNGE5YzFlpOeSKA==: 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.392 06:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.650 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.214 00:20:22.214 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.214 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.214 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.472 { 00:20:22.472 "auth": { 00:20:22.472 "dhgroup": "ffdhe8192", 00:20:22.472 "digest": "sha512", 00:20:22.472 "state": "completed" 00:20:22.472 }, 00:20:22.472 "cntlid": 141, 00:20:22.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:22.472 "listen_address": { 00:20:22.472 "adrfam": "IPv4", 00:20:22.472 "traddr": "10.0.0.3", 00:20:22.472 "trsvcid": "4420", 00:20:22.472 "trtype": "TCP" 00:20:22.472 }, 00:20:22.472 "peer_address": { 00:20:22.472 "adrfam": "IPv4", 00:20:22.472 "traddr": "10.0.0.1", 00:20:22.472 "trsvcid": "44836", 00:20:22.472 "trtype": "TCP" 00:20:22.472 }, 00:20:22.472 "qid": 0, 00:20:22.472 "state": 
"enabled", 00:20:22.472 "thread": "nvmf_tgt_poll_group_000" 00:20:22.472 } 00:20:22.472 ]' 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.472 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.730 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:22.730 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:01:YzkxY2Y5ZjgyNzg4OTBjNjI4Yzg5OGZhMDM0MWE5YTDzzxRW: 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.295 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.553 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.120 00:20:24.120 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.120 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.120 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.378 { 00:20:24.378 "auth": { 00:20:24.378 "dhgroup": "ffdhe8192", 00:20:24.378 "digest": "sha512", 00:20:24.378 "state": "completed" 00:20:24.378 }, 00:20:24.378 "cntlid": 143, 00:20:24.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:24.378 "listen_address": { 00:20:24.378 "adrfam": "IPv4", 00:20:24.378 "traddr": "10.0.0.3", 00:20:24.378 "trsvcid": "4420", 00:20:24.378 "trtype": "TCP" 00:20:24.378 }, 00:20:24.378 "peer_address": { 00:20:24.378 "adrfam": "IPv4", 00:20:24.378 "traddr": "10.0.0.1", 00:20:24.378 "trsvcid": "44862", 00:20:24.378 "trtype": "TCP" 00:20:24.378 }, 00:20:24.378 "qid": 0, 00:20:24.378 
"state": "enabled", 00:20:24.378 "thread": "nvmf_tgt_poll_group_000" 00:20:24.378 } 00:20:24.378 ]' 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.378 06:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.635 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:24.635 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:25.265 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.522 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.523 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.523 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.087 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.087 { 00:20:26.087 "auth": { 00:20:26.087 "dhgroup": "ffdhe8192", 00:20:26.087 "digest": "sha512", 00:20:26.087 "state": "completed" 00:20:26.087 }, 00:20:26.087 
"cntlid": 145, 00:20:26.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:26.087 "listen_address": { 00:20:26.087 "adrfam": "IPv4", 00:20:26.087 "traddr": "10.0.0.3", 00:20:26.087 "trsvcid": "4420", 00:20:26.087 "trtype": "TCP" 00:20:26.087 }, 00:20:26.087 "peer_address": { 00:20:26.087 "adrfam": "IPv4", 00:20:26.087 "traddr": "10.0.0.1", 00:20:26.087 "trsvcid": "44888", 00:20:26.087 "trtype": "TCP" 00:20:26.087 }, 00:20:26.087 "qid": 0, 00:20:26.087 "state": "enabled", 00:20:26.087 "thread": "nvmf_tgt_poll_group_000" 00:20:26.087 } 00:20:26.087 ]' 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.087 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.344 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.344 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.344 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.344 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.344 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.601 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:26.601 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:00:OTFlNWUxMTE0ODY4ZjJjNzZkNTBhMTJhOGNkZDY5MjcyNDAxMzE1ZDNkMTIwOWUzll6uxg==: --dhchap-ctrl-secret DHHC-1:03:ODkwMTZhNzlkYWI1ZWM0Njk4NWUzODZhZGYzNDFhNzNmYzE4M2I4ODc2NmU2MzdiYmVkYWZhMDE0NDAyODE2YVEi0zs=: 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 00:20:27.165 06:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:27.165 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:27.422 2024/11/27 06:22:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:27.422 request: 00:20:27.422 { 00:20:27.422 "method": "bdev_nvme_attach_controller", 00:20:27.422 "params": { 00:20:27.422 "name": "nvme0", 00:20:27.422 "trtype": "tcp", 00:20:27.422 "traddr": "10.0.0.3", 00:20:27.422 "adrfam": "ipv4", 00:20:27.422 "trsvcid": "4420", 00:20:27.422 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:27.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:27.422 "prchk_reftag": false, 00:20:27.422 "prchk_guard": false, 00:20:27.422 "hdgst": false, 00:20:27.422 "ddgst": false, 00:20:27.422 "dhchap_key": "key2", 00:20:27.422 "allow_unrecognized_csi": false 00:20:27.422 } 00:20:27.422 } 00:20:27.422 Got JSON-RPC error response 00:20:27.422 GoRPCClient: error on JSON-RPC call 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
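[editor's note] This is the first negative check of the section: the subsystem now holds only key1 for this host NQN, and the NOT wrapper from autotest_common asserts that a connect attempt presenting key2 exits nonzero. The Code=-5 (Input/output error) JSON-RPC request/response dump above is therefore the expected failure, not a test defect. The same expectation written directly, with flags taken from the attach call in the log:

    # Sketch: the attach must fail once the target no longer accepts this
    # key for the host NQN; succeeding here would be the real error.
    if hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key key2; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi
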
00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.679 06:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.937 2024/11/27 06:22:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:27.937 request: 00:20:27.937 { 00:20:27.937 "method": "bdev_nvme_attach_controller", 00:20:27.937 "params": { 00:20:27.937 "name": "nvme0", 00:20:27.937 "trtype": "tcp", 00:20:27.937 "traddr": "10.0.0.3", 00:20:27.937 "adrfam": "ipv4", 00:20:27.937 "trsvcid": "4420", 00:20:27.937 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:27.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:27.937 "prchk_reftag": false, 00:20:27.937 "prchk_guard": false, 00:20:27.937 "hdgst": false, 00:20:27.937 "ddgst": false, 00:20:27.937 "dhchap_key": "key1", 00:20:27.937 "dhchap_ctrlr_key": "ckey2", 00:20:27.937 "allow_unrecognized_csi": false 00:20:27.937 } 00:20:27.937 } 00:20:27.937 Got JSON-RPC error response 00:20:27.937 GoRPCClient: error on JSON-RPC call 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.194 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.451 2024/11/27 06:22:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:28.451 request: 00:20:28.451 { 00:20:28.451 "method": "bdev_nvme_attach_controller", 00:20:28.451 "params": { 00:20:28.451 "name": "nvme0", 00:20:28.451 "trtype": "tcp", 00:20:28.451 "traddr": "10.0.0.3", 00:20:28.451 "adrfam": "ipv4", 00:20:28.451 "trsvcid": "4420", 00:20:28.451 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:28.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:28.451 "prchk_reftag": false, 00:20:28.451 "prchk_guard": false, 00:20:28.451 "hdgst": false, 00:20:28.451 "ddgst": false, 00:20:28.451 "dhchap_key": "key1", 00:20:28.451 "dhchap_ctrlr_key": "ckey1", 00:20:28.451 "allow_unrecognized_csi": false 00:20:28.451 } 00:20:28.451 } 00:20:28.451 Got JSON-RPC error response 00:20:28.451 GoRPCClient: error on JSON-RPC call 00:20:28.451 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:28.451 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.451 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.451 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.451 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:28.451 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.451 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.709 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.709 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 92815 00:20:28.709 06:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 92815 ']' 00:20:28.709 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 92815 00:20:28.709 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:28.709 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.709 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92815 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92815' 00:20:28.709 killing process with pid 92815 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 92815 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 92815 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=97399 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 97399 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 97399 ']' 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
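[editor's note] At this point the first target (pid 92815) is killed and nvmfappstart launches a fresh one (pid 97399) with --wait-for-rpc and the nvmf_auth log flag; the full command trace follows below. The pause lets the test populate the keyring over RPC before subsystem init. A sketch of that startup ordering, assuming the same binary path as this run; framework_start_init is the standard SPDK RPC that releases an app started with --wait-for-rpc (it is not shown in this excerpt):

    # Start the target paused; -L nvmf_auth enables DH-HMAC-CHAP debug logs.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Load key material while the app is still waiting, then let init proceed.
    $rpc keyring_file_add_key key0 /tmp/spdk.key-null.6yT
    $rpc framework_start_init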
00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.709 06:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 97399 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 97399 ']' 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
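[editor's note] Note the ip netns exec nvmf_tgt_ns_spdk prefix on the nvmf_tgt invocation above: the target runs in its own network namespace, which is how 10.0.0.3 (target) and 10.0.0.1 (initiator) coexist on one CI VM. A hypothetical, minimal veth/netns setup in that spirit; names and addresses are illustrative, the real fixture lives in the test's common scripts:

    # Create an isolated namespace for the target and a veth pair into it.
    ip netns add nvmf_tgt_ns_spdk
    ip link add veth_host type veth peer name veth_tgt
    ip link set veth_tgt netns nvmf_tgt_ns_spdk

    # Initiator side keeps 10.0.0.1, target side gets 10.0.0.3.
    ip addr add 10.0.0.1/24 dev veth_host
    ip link set veth_host up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev veth_tgt
    ip netns exec nvmf_tgt_ns_spdk ip link set veth_tgt up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up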
00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.643 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 null0 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6yT 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Q0W ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q0W 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.mRD 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.xcJ ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xcJ 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:29.902 06:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zdH 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kH7 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kH7 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.M6F 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.902 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.160 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.160 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.160 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
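[editor's note] The loop above registers each key file (key0..key3, plus the ckey* controller keys where present) with keyring_file_add_key before any host entry references them by name; connect_authenticate then wires key3 to the host entry and attaches, negotiating sha512 with ffdhe8192. Condensed into a standalone sketch, reusing the file names from this run and assuming the key files already exist on disk in DHHC-1 format:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30

    # Register a key file with the target's keyring under a short name.
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.M6F

    # Allow the host to authenticate with key3, then attach from the
    # host-side app over its own RPC socket.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key3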
00:20:30.160 06:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.726 nvme0n1 00:20:30.726 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.726 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.726 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.984 { 00:20:30.984 "auth": { 00:20:30.984 "dhgroup": "ffdhe8192", 00:20:30.984 "digest": "sha512", 00:20:30.984 "state": "completed" 00:20:30.984 }, 00:20:30.984 "cntlid": 1, 00:20:30.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:30.984 "listen_address": { 00:20:30.984 "adrfam": "IPv4", 00:20:30.984 "traddr": "10.0.0.3", 00:20:30.984 "trsvcid": "4420", 00:20:30.984 "trtype": "TCP" 00:20:30.984 }, 00:20:30.984 "peer_address": { 00:20:30.984 "adrfam": "IPv4", 00:20:30.984 "traddr": "10.0.0.1", 00:20:30.984 "trsvcid": "43758", 00:20:30.984 "trtype": "TCP" 00:20:30.984 }, 00:20:30.984 "qid": 0, 00:20:30.984 "state": "enabled", 00:20:30.984 "thread": "nvmf_tgt_poll_group_000" 00:20:30.984 } 00:20:30.984 ]' 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.984 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.241 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.241 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.241 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.241 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.241 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.499 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:31.499 06:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key3 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.065 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.324 2024/11/27 06:22:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:32.324 request: 00:20:32.324 { 00:20:32.324 "method": "bdev_nvme_attach_controller", 00:20:32.324 "params": { 00:20:32.324 "name": "nvme0", 00:20:32.324 "trtype": "tcp", 00:20:32.324 "traddr": "10.0.0.3", 00:20:32.324 "adrfam": "ipv4", 00:20:32.324 "trsvcid": "4420", 00:20:32.324 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:32.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:32.324 "prchk_reftag": false, 00:20:32.324 "prchk_guard": false, 00:20:32.324 "hdgst": false, 00:20:32.324 "ddgst": false, 00:20:32.324 "dhchap_key": "key3", 00:20:32.324 "allow_unrecognized_csi": false 00:20:32.324 } 00:20:32.324 } 00:20:32.324 Got JSON-RPC error response 00:20:32.324 GoRPCClient: error on JSON-RPC call 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:32.324 06:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.588 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.850 2024/11/27 06:22:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:32.850 request: 00:20:32.850 { 00:20:32.850 "method": "bdev_nvme_attach_controller", 00:20:32.850 "params": { 00:20:32.850 "name": "nvme0", 00:20:32.850 "trtype": "tcp", 00:20:32.850 "traddr": "10.0.0.3", 00:20:32.850 "adrfam": "ipv4", 00:20:32.850 "trsvcid": "4420", 00:20:32.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:32.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:32.850 "prchk_reftag": false, 00:20:32.850 "prchk_guard": false, 00:20:32.850 "hdgst": false, 00:20:32.850 "ddgst": false, 00:20:32.850 "dhchap_key": "key3", 00:20:32.850 "allow_unrecognized_csi": false 00:20:32.850 } 00:20:32.850 } 00:20:32.850 Got JSON-RPC error response 00:20:32.850 GoRPCClient: error on JSON-RPC call 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.850 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:33.108 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:33.399 2024/11/27 06:22:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:33.399 request: 00:20:33.399 { 00:20:33.399 "method": "bdev_nvme_attach_controller", 00:20:33.399 "params": { 00:20:33.399 "name": "nvme0", 00:20:33.399 "trtype": "tcp", 00:20:33.399 "traddr": "10.0.0.3", 00:20:33.399 "adrfam": "ipv4", 00:20:33.399 "trsvcid": "4420", 00:20:33.399 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:33.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:33.399 "prchk_reftag": false, 00:20:33.399 "prchk_guard": false, 00:20:33.399 "hdgst": false, 00:20:33.399 "ddgst": false, 00:20:33.399 "dhchap_key": "key0", 00:20:33.399 "dhchap_ctrlr_key": "key1", 00:20:33.399 "allow_unrecognized_csi": false 00:20:33.399 } 00:20:33.399 } 00:20:33.399 Got JSON-RPC error response 00:20:33.399 GoRPCClient: error on JSON-RPC call 00:20:33.399 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:33.399 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.399 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.399 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.399 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:33.399 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:33.399 06:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:33.671 nvme0n1 00:20:33.671 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:33.671 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:33.671 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.957 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.957 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.957 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.215 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 00:20:34.215 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.215 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:34.215 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.215 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:34.215 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:34.215 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:35.150 nvme0n1 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.150 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:35.409 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.409 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:35.409 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -l 0 --dhchap-secret DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: --dhchap-ctrl-secret DHHC-1:03:YTdjN2M1NDUyMDBhYzRmMDFmNDAwMDZmOTZmYzZmMDZkMWIwY2M2ZWViMTc1YTU0YTA4ZGI5ZDg4NmQxMmI2YWPFLkg=: 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
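[editor's note] The nvme_get_ctrlr helper invoked here (its body is traced just below) resolves which kernel fabrics controller belongs to the test subsystem by scanning the nvme-fabrics sysfs entries for a matching subsystem NQN. A standalone sketch of the same lookup; the subsysnqn attribute name is an assumption inferred from the traced comparison, everything else is plain shell:

    # Find the fabrics controller whose subsystem NQN matches the target.
    nvme_get_ctrlr_sketch() {
        local want=nqn.2024-03.io.spdk:cnode0 dev
        for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
            # subsysnqn: the controller's subsystem NQN attribute in sysfs.
            if [[ $(cat "$dev/subsysnqn") == "$want" ]]; then
                basename "$dev"   # e.g. nvme0
                return 0
            fi
        done
        return 1
    }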
00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.973 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:36.231 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:36.797 2024/11/27 06:22:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:36.797 request: 00:20:36.797 { 00:20:36.797 "method": "bdev_nvme_attach_controller", 00:20:36.797 "params": { 00:20:36.797 "name": "nvme0", 00:20:36.797 "trtype": "tcp", 00:20:36.797 "traddr": "10.0.0.3", 00:20:36.797 "adrfam": "ipv4", 
00:20:36.797 "trsvcid": "4420", 00:20:36.798 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:36.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30", 00:20:36.798 "prchk_reftag": false, 00:20:36.798 "prchk_guard": false, 00:20:36.798 "hdgst": false, 00:20:36.798 "ddgst": false, 00:20:36.798 "dhchap_key": "key1", 00:20:36.798 "allow_unrecognized_csi": false 00:20:36.798 } 00:20:36.798 } 00:20:36.798 Got JSON-RPC error response 00:20:36.798 GoRPCClient: error on JSON-RPC call 00:20:36.798 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:36.798 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.798 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.798 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.798 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:36.798 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:36.798 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:37.731 nvme0n1 00:20:37.731 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:37.731 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.731 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:37.731 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.731 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.731 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.988 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:37.988 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:37.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:37.989 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:38.267 nvme0n1 00:20:38.267 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:38.267 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:38.267 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.524 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.524 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.524 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: '' 2s 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: ]] 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTU5MTE4OThlZGU4YTczZmFhMzhhOTFmNDM1MGJjMDMoWYkt: 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:38.782 06:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: 2s 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: ]] 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTJhOTQ5NGM4MTVlZGVmNGYzNTJmYjE2MzY2MThjNjI2MDE2ODZmODk2ZjQyOTFjaCid3Q==: 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:40.681 06:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:43.213 06:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:43.778 nvme0n1 00:20:43.778 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:43.778 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.778 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.778 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.778 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:43.778 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:44.343 06:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:44.601 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:44.601 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:44.601 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:44.859 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:20:45.425 2024/11/27 06:23:04 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:45.425 request: 00:20:45.425 { 00:20:45.425 "method": "bdev_nvme_set_keys", 00:20:45.425 "params": { 00:20:45.425 "name": "nvme0", 00:20:45.425 "dhchap_key": "key1", 00:20:45.425 "dhchap_ctrlr_key": "key3" 00:20:45.425 } 00:20:45.425 } 00:20:45.425 Got JSON-RPC error response 00:20:45.425 GoRPCClient: error on JSON-RPC call 00:20:45.425 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:45.425 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.425 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.425 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.425 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:45.425 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.425 06:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:45.683 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:45.683 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:46.616 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:46.616 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:46.616 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:46.874 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:47.807 nvme0n1 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:47.807 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:48.404 2024/11/27 06:23:07 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:48.404 request: 00:20:48.404 { 00:20:48.404 "method": "bdev_nvme_set_keys", 00:20:48.404 "params": { 00:20:48.404 "name": "nvme0", 00:20:48.404 "dhchap_key": "key2", 00:20:48.404 "dhchap_ctrlr_key": "key0" 00:20:48.404 } 00:20:48.404 } 00:20:48.404 Got JSON-RPC error response 00:20:48.404 GoRPCClient: error on JSON-RPC call 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
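For reference, the re-key sequence this trace exercises reduces to three JSON-RPC calls. A minimal sketch, assuming the target app answers on the default RPC socket (the trace's rpc_cmd) and the host app on /var/tmp/host.sock (the trace's hostrpc); identifiers are copied from the trace and error handling is omitted:

#!/usr/bin/env bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30

# 1. Rotate the keys the target will accept for this host.
"$RPC" nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 2. Push the matching pair to the host-side controller. A mismatched
#    pair (key1/key3 or key2/key0 in this trace) is expected to fail
#    re-authentication with Code=-13 Msg=Permission denied.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 3. Confirm the controller survived the re-key.
"$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'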
00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:48.404 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:49.776 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:49.776 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:49.776 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 92855 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 92855 ']' 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 92855 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92855 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.777 killing process with pid 92855 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92855' 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 92855 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 92855 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.777 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.034 rmmod nvme_tcp 00:20:50.034 rmmod nvme_fabrics 00:20:50.034 rmmod nvme_keyring 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 97399 ']' 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 97399 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 97399 ']' 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 97399 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97399 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97399' 00:20:50.034 killing process with pid 97399 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 97399 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 97399 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.034 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:50.293 06:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6yT /tmp/spdk.key-sha256.mRD /tmp/spdk.key-sha384.zdH /tmp/spdk.key-sha512.M6F /tmp/spdk.key-sha512.Q0W /tmp/spdk.key-sha384.xcJ /tmp/spdk.key-sha256.kH7 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:20:50.293 00:20:50.293 real 2m34.061s 00:20:50.293 user 6m8.284s 00:20:50.293 sys 0m15.767s 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.293 ************************************ 00:20:50.293 END TEST nvmf_auth_target 00:20:50.293 ************************************ 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.293 06:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.552 ************************************ 00:20:50.552 START TEST nvmf_bdevio_no_huge 00:20:50.552 ************************************ 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:50.552 * Looking for test storage... 
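The cleanup above tears the virtual topology down in a fixed order. A rough sketch using the interface and namespace names from this run; the final netns delete stands in for _remove_spdk_ns and is an assumption, since that helper's body is suppressed in the trace:

#!/usr/bin/env bash
# Drop the SPDK_NVMF-tagged iptables rules added at setup (iptr above).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the four veth bridge ports, bring them down, then delete the
# bridge, the initiator-side links, and the target-side links in the netns.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster
done
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns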
00:20:50.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.552 --rc genhtml_branch_coverage=1 00:20:50.552 --rc genhtml_function_coverage=1 00:20:50.552 --rc genhtml_legend=1 00:20:50.552 --rc geninfo_all_blocks=1 00:20:50.552 --rc geninfo_unexecuted_blocks=1 00:20:50.552 00:20:50.552 ' 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.552 --rc genhtml_branch_coverage=1 00:20:50.552 --rc genhtml_function_coverage=1 00:20:50.552 --rc genhtml_legend=1 00:20:50.552 --rc geninfo_all_blocks=1 00:20:50.552 --rc geninfo_unexecuted_blocks=1 00:20:50.552 00:20:50.552 ' 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.552 --rc genhtml_branch_coverage=1 00:20:50.552 --rc genhtml_function_coverage=1 00:20:50.552 --rc genhtml_legend=1 00:20:50.552 --rc geninfo_all_blocks=1 00:20:50.552 --rc geninfo_unexecuted_blocks=1 00:20:50.552 00:20:50.552 ' 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.552 --rc genhtml_branch_coverage=1 00:20:50.552 --rc genhtml_function_coverage=1 00:20:50.552 --rc genhtml_legend=1 00:20:50.552 --rc geninfo_all_blocks=1 00:20:50.552 --rc geninfo_unexecuted_blocks=1 00:20:50.552 00:20:50.552 ' 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.552 
06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.552 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.553 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.553 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.553 
06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:50.553 Cannot find device "nvmf_init_br" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:50.553 Cannot find device "nvmf_init_br2" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:50.553 Cannot find device "nvmf_tgt_br" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.553 Cannot find device "nvmf_tgt_br2" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:50.553 Cannot find device "nvmf_init_br" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:50.553 Cannot find device "nvmf_init_br2" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:50.553 Cannot find device "nvmf_tgt_br" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:50.553 Cannot find device "nvmf_tgt_br2" 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:20:50.553 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:50.812 Cannot find device "nvmf_br" 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:50.812 Cannot find device "nvmf_init_if" 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:50.812 Cannot find device "nvmf_init_if2" 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:20:50.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:50.812 06:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:50.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:50.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:50.812 00:20:50.812 --- 10.0.0.3 ping statistics --- 00:20:50.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.812 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:50.812 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:50.812 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:20:50.812 00:20:50.812 --- 10.0.0.4 ping statistics --- 00:20:50.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.812 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:50.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:50.812 00:20:50.812 --- 10.0.0.1 ping statistics --- 00:20:50.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.812 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:50.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:50.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:20:50.812 00:20:50.812 --- 10.0.0.2 ping statistics --- 00:20:50.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.812 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:50.812 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=98241 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 98241 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 98241 ']' 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:50.813 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:50.813 [2024-11-27 06:23:10.329888] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
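The invocation that matters for this test mode is the one above: nvmf_tgt started inside the target namespace with hugepages disabled. A minimal sketch of nvmfappstart under these flags, with waitforlisten reduced to a simple RPC poll (an approximation of the real helper):

#!/usr/bin/env bash
# Start the target in the prepared namespace: no hugepages, 1024 MiB of
# regular memory (-s 1024), cores 3-6 (-m 0x78), all tracepoints (-e 0xFFFF).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Poll the default RPC socket until the app is up; the real waitforlisten
# also verifies the pid is still alive, which is omitted here for brevity.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    rpc_get_methods &>/dev/null; do
    sleep 0.5
done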
00:20:50.813 [2024-11-27 06:23:10.330226] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:51.071 [2024-11-27 06:23:10.458452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:51.071 [2024-11-27 06:23:10.476965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.071 [2024-11-27 06:23:10.514332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.071 [2024-11-27 06:23:10.514369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.071 [2024-11-27 06:23:10.514375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.071 [2024-11-27 06:23:10.514381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.071 [2024-11-27 06:23:10.514385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.071 [2024-11-27 06:23:10.514995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:51.071 [2024-11-27 06:23:10.515061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:51.071 [2024-11-27 06:23:10.515130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:51.071 [2024-11-27 06:23:10.515426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:52.006 [2024-11-27 06:23:11.238746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:52.006 Malloc0 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:52.006 [2024-11-27 06:23:11.274857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.006 { 00:20:52.006 "params": { 00:20:52.006 "name": "Nvme$subsystem", 00:20:52.006 "trtype": "$TEST_TRANSPORT", 00:20:52.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.006 "adrfam": "ipv4", 00:20:52.006 "trsvcid": "$NVMF_PORT", 00:20:52.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.006 "hdgst": ${hdgst:-false}, 00:20:52.006 "ddgst": ${ddgst:-false} 00:20:52.006 }, 00:20:52.006 "method": "bdev_nvme_attach_controller" 00:20:52.006 } 00:20:52.006 EOF 00:20:52.006 )") 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
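(gen_nvmf_target_json above stitches one bdev_nvme_attach_controller entry per subsystem into a bdev config and validates it with jq; bdevio then reads that JSON over /dev/fd/62. A condensed sketch of the pattern, assuming the single-subsystem values the log resolves to; the exact wrapper heredoc in nvmf/common.sh may differ in detail:)

    # hypothetical standalone rendering of the config the heredoc produces
    subsystem=1
    jq . <<EOF
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme${subsystem}",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
            "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }
    EOF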
00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:52.006 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:52.006 "params": { 00:20:52.006 "name": "Nvme1", 00:20:52.006 "trtype": "tcp", 00:20:52.006 "traddr": "10.0.0.3", 00:20:52.006 "adrfam": "ipv4", 00:20:52.006 "trsvcid": "4420", 00:20:52.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.006 "hdgst": false, 00:20:52.006 "ddgst": false 00:20:52.006 }, 00:20:52.006 "method": "bdev_nvme_attach_controller" 00:20:52.006 }' 00:20:52.006 [2024-11-27 06:23:11.315151] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:20:52.006 [2024-11-27 06:23:11.315208] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid98295 ] 00:20:52.006 [2024-11-27 06:23:11.439360] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:52.006 [2024-11-27 06:23:11.459483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:52.006 [2024-11-27 06:23:11.496098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.006 [2024-11-27 06:23:11.496167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.006 [2024-11-27 06:23:11.496462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.264 I/O targets: 00:20:52.264 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:52.264 00:20:52.264 00:20:52.264 CUnit - A unit testing framework for C - Version 2.1-3 00:20:52.264 http://cunit.sourceforge.net/ 00:20:52.264 00:20:52.264 00:20:52.264 Suite: bdevio tests on: Nvme1n1 00:20:52.264 Test: blockdev write read block ...passed 00:20:52.264 Test: blockdev write zeroes read block ...passed 00:20:52.264 Test: blockdev write zeroes read no split ...passed 00:20:52.264 Test: blockdev write zeroes read split ...passed 00:20:52.264 Test: blockdev write zeroes read split partial ...passed 00:20:52.264 Test: blockdev reset ...[2024-11-27 06:23:11.773211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:52.264 [2024-11-27 06:23:11.773287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a2d90 (9): Bad file descriptor 00:20:52.264 [2024-11-27 06:23:11.789425] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:20:52.264 passed 00:20:52.264 Test: blockdev write read 8 blocks ...passed 00:20:52.264 Test: blockdev write read size > 128k ...passed 00:20:52.264 Test: blockdev write read invalid size ...passed 00:20:52.522 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:52.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:52.523 Test: blockdev write read max offset ...passed 00:20:52.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:52.523 Test: blockdev writev readv 8 blocks ...passed 00:20:52.523 Test: blockdev writev readv 30 x 1block ...passed 00:20:52.523 Test: blockdev writev readv block ...passed 00:20:52.523 Test: blockdev writev readv size > 128k ...passed 00:20:52.523 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:52.523 Test: blockdev comparev and writev ...[2024-11-27 06:23:11.958984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.959019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:11.959032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.959039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:11.959396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.959416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:11.959428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.959434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:11.959695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.959715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:11.959727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.959733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:11.959964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.959982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:11.959994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:52.523 [2024-11-27 06:23:11.960000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:52.523 passed 00:20:52.523 Test: blockdev nvme passthru rw ...passed 00:20:52.523 Test: blockdev nvme passthru vendor specific ...[2024-11-27 06:23:12.043068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:52.523 [2024-11-27 06:23:12.043091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:12.043176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:52.523 [2024-11-27 06:23:12.043183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:12.043262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:52.523 [2024-11-27 06:23:12.043273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:52.523 [2024-11-27 06:23:12.043351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:52.523 [2024-11-27 06:23:12.043363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:52.523 passed 00:20:52.791 Test: blockdev nvme admin passthru ...passed 00:20:52.791 Test: blockdev copy ...passed 00:20:52.791 00:20:52.791 Run Summary: Type Total Ran Passed Failed Inactive 00:20:52.791 suites 1 1 n/a 0 0 00:20:52.791 tests 23 23 23 0 0 00:20:52.791 asserts 152 152 152 0 n/a 00:20:52.791 00:20:52.791 Elapsed time = 0.886 seconds 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.068 rmmod nvme_tcp 00:20:53.068 rmmod nvme_fabrics 00:20:53.068 rmmod nvme_keyring 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 98241 ']' 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 98241 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 98241 ']' 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 98241 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98241 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:53.068 killing process with pid 98241 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98241' 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 98241 00:20:53.068 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 98241 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:53.326 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:53.327 06:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:53.327 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:53.327 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:20:53.585 00:20:53.585 real 0m3.152s 00:20:53.585 user 0m10.573s 00:20:53.585 sys 0m1.080s 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.585 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.585 ************************************ 00:20:53.585 END TEST nvmf_bdevio_no_huge 00:20:53.585 ************************************ 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:53.585 ************************************ 00:20:53.585 START TEST nvmf_tls 00:20:53.585 ************************************ 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:53.585 * Looking for test storage... 
00:20:53.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:53.585 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:53.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.844 --rc genhtml_branch_coverage=1 00:20:53.844 --rc genhtml_function_coverage=1 00:20:53.844 --rc genhtml_legend=1 00:20:53.844 --rc geninfo_all_blocks=1 00:20:53.844 --rc geninfo_unexecuted_blocks=1 00:20:53.844 00:20:53.844 ' 00:20:53.844 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.845 --rc genhtml_branch_coverage=1 00:20:53.845 --rc genhtml_function_coverage=1 00:20:53.845 --rc genhtml_legend=1 00:20:53.845 --rc geninfo_all_blocks=1 00:20:53.845 --rc geninfo_unexecuted_blocks=1 00:20:53.845 00:20:53.845 ' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.845 --rc genhtml_branch_coverage=1 00:20:53.845 --rc genhtml_function_coverage=1 00:20:53.845 --rc genhtml_legend=1 00:20:53.845 --rc geninfo_all_blocks=1 00:20:53.845 --rc geninfo_unexecuted_blocks=1 00:20:53.845 00:20:53.845 ' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.845 --rc genhtml_branch_coverage=1 00:20:53.845 --rc genhtml_function_coverage=1 00:20:53.845 --rc genhtml_legend=1 00:20:53.845 --rc geninfo_all_blocks=1 00:20:53.845 --rc geninfo_unexecuted_blocks=1 00:20:53.845 00:20:53.845 ' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.845 06:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.845 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.845 
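(nvmftestinit below tears down any stale namespace and rebuilds the virtual test network; with NET_TYPE=virt this runs nvmf_veth_init. Condensed from the ip commands that follow in the log, the topology amounts to the sketch below, showing only one of the two initiator/target veth pairs:)

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP (port 4420) in from the initiator side
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT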
06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:53.845 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:53.846 Cannot find device "nvmf_init_br" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:53.846 Cannot find device "nvmf_init_br2" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:53.846 Cannot find device "nvmf_tgt_br" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:53.846 Cannot find device "nvmf_tgt_br2" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:53.846 Cannot find device "nvmf_init_br" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:53.846 Cannot find device "nvmf_init_br2" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:53.846 Cannot find device "nvmf_tgt_br" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:53.846 Cannot find device "nvmf_tgt_br2" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:53.846 Cannot find device "nvmf_br" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:53.846 Cannot find device "nvmf_init_if" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:53.846 Cannot find device "nvmf_init_if2" 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:53.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:53.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:53.846 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:54.105 06:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:54.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:54.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:20:54.105 00:20:54.105 --- 10.0.0.3 ping statistics --- 00:20:54.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.105 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:54.105 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:54.105 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:20:54.105 00:20:54.105 --- 10.0.0.4 ping statistics --- 00:20:54.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.105 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:54.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:54.105 00:20:54.105 --- 10.0.0.1 ping statistics --- 00:20:54.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.105 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:54.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:54.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:20:54.105 00:20:54.105 --- 10.0.0.2 ping statistics --- 00:20:54.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.105 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:54.105 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=98529 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 98529 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 98529 ']' 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.106 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.106 [2024-11-27 06:23:13.528161] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:20:54.106 [2024-11-27 06:23:13.528219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.364 [2024-11-27 06:23:13.645490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:20:54.364 [2024-11-27 06:23:13.673722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.364 [2024-11-27 06:23:13.691202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.364 [2024-11-27 06:23:13.691236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.364 [2024-11-27 06:23:13.691243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.364 [2024-11-27 06:23:13.691248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.364 [2024-11-27 06:23:13.691252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.364 [2024-11-27 06:23:13.691497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:54.929 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:55.186 true 00:20:55.186 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:55.186 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:55.444 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:55.444 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:55.444 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:55.703 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:55.703 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:55.703 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:55.703 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:55.703 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:55.962 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:55.962 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:56.220 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:56.220 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:56.220 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:56.220 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:56.478 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:56.478 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:56.478 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:56.478 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:56.478 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:56.736 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:56.736 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:56.736 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:56.997 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:56.997 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:57.257 
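(format_interchange_psk above wraps a configured key in the NVMe TLS PSK interchange format: a version prefix, a two-digit hash identifier (01 here), then base64 of the key bytes with a little-endian CRC-32 appended, and a trailing colon. A sketch of the computation the python heredoc appears to perform, assuming python3 and the first key from the log:)

    python3 - <<'EOF'
    import base64, zlib

    key = b"00112233445566778899aabbccddeeff"    # configured PSK from the log
    crc = zlib.crc32(key).to_bytes(4, "little")  # integrity check over the key bytes
    print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
    EOF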
06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Pdh6c8pbXP 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.QNw6wesAx8 00:20:57.257 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:57.258 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:57.258 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Pdh6c8pbXP 00:20:57.258 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.QNw6wesAx8 00:20:57.258 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:57.519 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:57.781 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Pdh6c8pbXP 00:20:57.781 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Pdh6c8pbXP 00:20:57.781 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.042 [2024-11-27 06:23:17.328468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.042 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:58.042 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:20:58.303 [2024-11-27 06:23:17.736516] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:58.303 [2024-11-27 06:23:17.736674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:58.303 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.563 malloc0 00:20:58.563 06:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:58.820 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Pdh6c8pbXP 00:20:59.079 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.079 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw 
-M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Pdh6c8pbXP 00:21:11.366 Initializing NVMe Controllers 00:21:11.366 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.366 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:11.366 Initialization complete. Launching workers. 00:21:11.366 ======================================================== 00:21:11.366 Latency(us) 00:21:11.366 Device Information : IOPS MiB/s Average min max 00:21:11.366 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17907.76 69.95 3574.11 989.09 4414.37 00:21:11.366 ======================================================== 00:21:11.366 Total : 17907.76 69.95 3574.11 989.09 4414.37 00:21:11.366 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pdh6c8pbXP 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Pdh6c8pbXP 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=98883 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 98883 /var/tmp/bdevperf.sock 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 98883 ']' 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.366 06:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.366 [2024-11-27 06:23:28.806378] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
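[editor's note] The two NVMeTLSkey-1:01:... strings produced at target/tls.sh@119-120 above come out of format_interchange_psk, whose inline `python -` heredoc body is not echoed by xtrace, so the computation is invisible in this log. A minimal sketch of what that helper evaluates, consistent with the keys captured above (base64 of the key material plus its little-endian CRC-32, with the digest as a two-digit hash field); treat this as a reconstruction, not a quote of the shipped nvmf/common.sh:

format_key() {
	local prefix=$1 key=$2 digest=$3
	python - <<-EOF
		import base64
		import zlib

		key = b"$key"
		# payload is the key material followed by its CRC-32 (little-endian)
		crc = zlib.crc32(key).to_bytes(4, byteorder="little")
		b64 = base64.b64encode(key + crc).decode("utf-8")
		print("$prefix:{:02x}:{}:".format($digest, b64), end="")
	EOF
}

# format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# should reproduce the first key logged above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: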
00:21:11.366 [2024-11-27 06:23:28.806437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98883 ] 00:21:11.366 [2024-11-27 06:23:28.921384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:11.367 [2024-11-27 06:23:28.942198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.367 [2024-11-27 06:23:28.960544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.367 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.367 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.367 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Pdh6c8pbXP 00:21:11.367 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.367 [2024-11-27 06:23:29.424081] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.367 TLSTESTn1 00:21:11.367 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:11.367 Running I/O for 10 seconds... 
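[editor's note] The run_bdevperf helper traced above drives the initiator side entirely through bdevperf's private RPC socket. Condensed from the xtrace lines of this successful run (rpc.py stands for the full scripts/rpc.py path shown in the log; the harness also waits on the socket with waitforlisten before issuing RPCs):

# launch bdevperf idle (-z) with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
	-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# register the PSK file, then attach over TLS using it
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Pdh6c8pbXP
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
	-a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
	-q nqn.2016-06.io.spdk:host1 --psk key0

# kick off the verify workload against the attached bdev (TLSTESTn1)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
	-s /var/tmp/bdevperf.sock perform_tests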
00:21:12.309 6302.00 IOPS, 24.62 MiB/s [2024-11-27T06:23:32.782Z] 6751.00 IOPS, 26.37 MiB/s [2024-11-27T06:23:33.724Z] 6918.00 IOPS, 27.02 MiB/s [2024-11-27T06:23:34.665Z] 7000.00 IOPS, 27.34 MiB/s [2024-11-27T06:23:36.044Z] 6984.00 IOPS, 27.28 MiB/s [2024-11-27T06:23:36.613Z] 6976.00 IOPS, 27.25 MiB/s [2024-11-27T06:23:37.995Z] 6976.00 IOPS, 27.25 MiB/s [2024-11-27T06:23:38.938Z] 6963.12 IOPS, 27.20 MiB/s [2024-11-27T06:23:39.922Z] 6956.33 IOPS, 27.17 MiB/s [2024-11-27T06:23:39.922Z] 6962.80 IOPS, 27.20 MiB/s 00:21:20.392 Latency(us) 00:21:20.392 [2024-11-27T06:23:39.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.392 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.392 Verification LBA range: start 0x0 length 0x2000 00:21:20.392 TLSTESTn1 : 10.01 6968.86 27.22 0.00 0.00 18331.56 3503.66 16938.54 00:21:20.392 [2024-11-27T06:23:39.922Z] =================================================================================================================== 00:21:20.392 [2024-11-27T06:23:39.922Z] Total : 6968.86 27.22 0.00 0.00 18331.56 3503.66 16938.54 00:21:20.392 { 00:21:20.392 "results": [ 00:21:20.392 { 00:21:20.392 "job": "TLSTESTn1", 00:21:20.392 "core_mask": "0x4", 00:21:20.392 "workload": "verify", 00:21:20.392 "status": "finished", 00:21:20.392 "verify_range": { 00:21:20.392 "start": 0, 00:21:20.392 "length": 8192 00:21:20.392 }, 00:21:20.392 "queue_depth": 128, 00:21:20.392 "io_size": 4096, 00:21:20.392 "runtime": 10.00924, 00:21:20.392 "iops": 6968.860772646075, 00:21:20.392 "mibps": 27.22211239314873, 00:21:20.392 "io_failed": 0, 00:21:20.392 "io_timeout": 0, 00:21:20.392 "avg_latency_us": 18331.557797017827, 00:21:20.392 "min_latency_us": 3503.6553846153847, 00:21:20.392 "max_latency_us": 16938.535384615385 00:21:20.392 } 00:21:20.392 ], 00:21:20.392 "core_count": 1 00:21:20.392 } 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 98883 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 98883 ']' 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 98883 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98883 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:20.392 killing process with pid 98883 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98883' 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 98883 00:21:20.392 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.392 00:21:20.392 Latency(us) 00:21:20.392 [2024-11-27T06:23:39.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.392 [2024-11-27T06:23:39.922Z] 
=================================================================================================================== 00:21:20.392 [2024-11-27T06:23:39.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 98883 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QNw6wesAx8 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QNw6wesAx8 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QNw6wesAx8 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QNw6wesAx8 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99029 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99029 /var/tmp/bdevperf.sock 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99029 ']' 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
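[editor's note] target/tls.sh@147 above flips to negative testing: NOT run_bdevperf ... /tmp/tmp.QNw6wesAx8 presents the second key, which the target never registered, and requires the attach to fail. The exit-status bookkeeping visible in the trace (es=0, es=1, (( es > 128 )), (( !es == 0 ))) matches a wrapper along these lines, sketched from the trace rather than quoted from autotest_common.sh:

NOT() {
	# invert a command's status: succeed only if it failed, but propagate
	# statuses above 128 (command killed by a signal) unchanged
	local es=0
	"$@" || es=$?
	(( es > 128 )) && return "$es"
	(( es != 0 ))
}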
00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.392 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.392 [2024-11-27 06:23:39.802729] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:21:20.392 [2024-11-27 06:23:39.802794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99029 ] 00:21:20.392 [2024-11-27 06:23:39.917398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:20.654 [2024-11-27 06:23:39.939151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.654 [2024-11-27 06:23:39.957399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.226 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.226 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:21.226 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QNw6wesAx8 00:21:21.486 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.746 [2024-11-27 06:23:41.028473] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.746 [2024-11-27 06:23:41.037999] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:21.746 [2024-11-27 06:23:41.038399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a20d0 (107): Transport endpoint is not connected 00:21:21.746 [2024-11-27 06:23:41.039389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a20d0 (9): Bad file descriptor 00:21:21.746 [2024-11-27 06:23:41.040387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:21.746 [2024-11-27 06:23:41.040433] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:21:21.746 [2024-11-27 06:23:41.040466] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:21.746 [2024-11-27 06:23:41.040501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:21:21.747 2024/11/27 06:23:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:21.747 request: 00:21:21.747 { 00:21:21.747 "method": "bdev_nvme_attach_controller", 00:21:21.747 "params": { 00:21:21.747 "name": "TLSTEST", 00:21:21.747 "trtype": "tcp", 00:21:21.747 "traddr": "10.0.0.3", 00:21:21.747 "adrfam": "ipv4", 00:21:21.747 "trsvcid": "4420", 00:21:21.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.747 "prchk_reftag": false, 00:21:21.747 "prchk_guard": false, 00:21:21.747 "hdgst": false, 00:21:21.747 "ddgst": false, 00:21:21.747 "psk": "key0", 00:21:21.747 "allow_unrecognized_csi": false 00:21:21.747 } 00:21:21.747 } 00:21:21.747 Got JSON-RPC error response 00:21:21.747 GoRPCClient: error on JSON-RPC call 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 99029 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99029 ']' 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99029 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99029 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:21.747 killing process with pid 99029 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99029' 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99029 00:21:21.747 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.747 00:21:21.747 Latency(us) 00:21:21.747 [2024-11-27T06:23:41.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.747 [2024-11-27T06:23:41.277Z] =================================================================================================================== 00:21:21.747 [2024-11-27T06:23:41.277Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99029 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:21.747 06:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Pdh6c8pbXP 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Pdh6c8pbXP 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Pdh6c8pbXP 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Pdh6c8pbXP 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99080 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99080 /var/tmp/bdevperf.sock 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99080 ']' 00:21:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.747 06:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.747 [2024-11-27 06:23:41.224672] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:21:21.747 [2024-11-27 06:23:41.224750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99080 ] 00:21:22.007 [2024-11-27 06:23:41.339816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:22.007 [2024-11-27 06:23:41.365715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.007 [2024-11-27 06:23:41.383320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.580 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.580 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:22.580 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Pdh6c8pbXP 00:21:23.153 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:23.153 [2024-11-27 06:23:42.560850] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.153 [2024-11-27 06:23:42.568080] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:23.153 [2024-11-27 06:23:42.568108] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:23.153 [2024-11-27 06:23:42.568137] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:23.153 [2024-11-27 06:23:42.568777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c510d0 (107): Transport endpoint is not connected 00:21:23.153 [2024-11-27 06:23:42.569766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c510d0 (9): Bad file descriptor 00:21:23.153 [2024-11-27 06:23:42.570765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:23.153 [2024-11-27 06:23:42.570805] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:21:23.153 [2024-11-27 06:23:42.570834] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:23.153 [2024-11-27 06:23:42.570865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
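[editor's note] The diagnosis for this case sits in the target-side errors just above: during the TLS handshake the target resolves the client's PSK identity, which binds hostnqn and subnqn together, and nqn.2016-06.io.spdk:host2 was never registered against cnode1. A hypothetical helper, only to make the identity string from the error legible (the real string is assembled inside SPDK's TLS code):

# hypothetical illustration of the identity format seen in the errors above
psk_identity() {
	local hostnqn=$1 subnqn=$2
	printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
}
psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
#    (the identity the target reported it could not find a PSK for)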
00:21:23.153 2024/11/27 06:23:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:23.153 request: 00:21:23.153 { 00:21:23.153 "method": "bdev_nvme_attach_controller", 00:21:23.153 "params": { 00:21:23.153 "name": "TLSTEST", 00:21:23.153 "trtype": "tcp", 00:21:23.153 "traddr": "10.0.0.3", 00:21:23.153 "adrfam": "ipv4", 00:21:23.153 "trsvcid": "4420", 00:21:23.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.153 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:23.153 "prchk_reftag": false, 00:21:23.154 "prchk_guard": false, 00:21:23.154 "hdgst": false, 00:21:23.154 "ddgst": false, 00:21:23.154 "psk": "key0", 00:21:23.154 "allow_unrecognized_csi": false 00:21:23.154 } 00:21:23.154 } 00:21:23.154 Got JSON-RPC error response 00:21:23.154 GoRPCClient: error on JSON-RPC call 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 99080 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99080 ']' 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99080 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99080 00:21:23.154 killing process with pid 99080 00:21:23.154 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.154 00:21:23.154 Latency(us) 00:21:23.154 [2024-11-27T06:23:42.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.154 [2024-11-27T06:23:42.684Z] =================================================================================================================== 00:21:23.154 [2024-11-27T06:23:42.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99080' 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99080 00:21:23.154 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99080 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.415 06:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pdh6c8pbXP 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pdh6c8pbXP 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:23.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.415 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pdh6c8pbXP 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Pdh6c8pbXP 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99134 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99134 /var/tmp/bdevperf.sock 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99134 ']' 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.416 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.416 [2024-11-27 06:23:42.750049] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:21:23.416 [2024-11-27 06:23:42.750114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99134 ] 00:21:23.416 [2024-11-27 06:23:42.863983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:23.416 [2024-11-27 06:23:42.889292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.416 [2024-11-27 06:23:42.907296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.359 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.359 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:24.359 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Pdh6c8pbXP 00:21:24.359 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:24.621 [2024-11-27 06:23:43.938347] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.621 [2024-11-27 06:23:43.948232] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:24.621 [2024-11-27 06:23:43.948259] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:24.621 [2024-11-27 06:23:43.948288] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:24.621 [2024-11-27 06:23:43.949263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c80d0 (107): Transport endpoint is not connected 00:21:24.621 [2024-11-27 06:23:43.950253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c80d0 (9): Bad file descriptor 00:21:24.621 [2024-11-27 06:23:43.951251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:24.621 [2024-11-27 06:23:43.951302] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:21:24.621 [2024-11-27 06:23:43.951332] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:24.621 [2024-11-27 06:23:43.951370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
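[editor's note] Same failure shape as the host2 case, from the other direction: the identity now names cnode2, which exists in no registration on the target. Purely hypothetically (this test expects the failure and never does this), the attach could only succeed after giving the target a matching subsystem and host binding, e.g.:

# hypothetical fix-up, not performed in this run; the serial number is invented
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0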
00:21:24.621 2024/11/27 06:23:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:21:24.621 request: 00:21:24.621 { 00:21:24.621 "method": "bdev_nvme_attach_controller", 00:21:24.621 "params": { 00:21:24.621 "name": "TLSTEST", 00:21:24.621 "trtype": "tcp", 00:21:24.621 "traddr": "10.0.0.3", 00:21:24.621 "adrfam": "ipv4", 00:21:24.621 "trsvcid": "4420", 00:21:24.621 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:24.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.621 "prchk_reftag": false, 00:21:24.621 "prchk_guard": false, 00:21:24.621 "hdgst": false, 00:21:24.621 "ddgst": false, 00:21:24.621 "psk": "key0", 00:21:24.621 "allow_unrecognized_csi": false 00:21:24.621 } 00:21:24.621 } 00:21:24.621 Got JSON-RPC error response 00:21:24.621 GoRPCClient: error on JSON-RPC call 00:21:24.621 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 99134 00:21:24.621 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99134 ']' 00:21:24.621 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99134 00:21:24.621 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:24.621 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.621 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99134 00:21:24.621 killing process with pid 99134 00:21:24.621 Received shutdown signal, test time was about 10.000000 seconds 00:21:24.621 00:21:24.621 Latency(us) 00:21:24.621 [2024-11-27T06:23:44.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.621 [2024-11-27T06:23:44.151Z] =================================================================================================================== 00:21:24.621 [2024-11-27T06:23:44.151Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99134' 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99134 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99134 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:24.621 06:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:24.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99181 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99181 /var/tmp/bdevperf.sock 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99181 ']' 00:21:24.621 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:24.622 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.622 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.622 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.622 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.622 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.622 [2024-11-27 06:23:44.135310] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:21:24.622 [2024-11-27 06:23:44.135370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99181 ] 00:21:24.883 [2024-11-27 06:23:44.248713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:24.883 [2024-11-27 06:23:44.273514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.883 [2024-11-27 06:23:44.291227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.827 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.827 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:25.827 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:25.827 [2024-11-27 06:23:45.192663] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:25.828 [2024-11-27 06:23:45.192937] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:25.828 2024/11/27 06:23:45 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:21:25.828 request: 00:21:25.828 { 00:21:25.828 "method": "keyring_file_add_key", 00:21:25.828 "params": { 00:21:25.828 "name": "key0", 00:21:25.828 "path": "" 00:21:25.828 } 00:21:25.828 } 00:21:25.828 Got JSON-RPC error response 00:21:25.828 GoRPCClient: error on JSON-RPC call 00:21:25.828 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:26.089 [2024-11-27 06:23:45.396788] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.089 [2024-11-27 06:23:45.397001] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:26.089 2024/11/27 06:23:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:21:26.089 request: 00:21:26.089 { 00:21:26.089 "method": "bdev_nvme_attach_controller", 00:21:26.089 "params": { 00:21:26.089 "name": "TLSTEST", 00:21:26.089 "trtype": "tcp", 00:21:26.089 "traddr": "10.0.0.3", 00:21:26.089 "adrfam": "ipv4", 00:21:26.089 "trsvcid": "4420", 00:21:26.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.089 "prchk_reftag": false, 00:21:26.089 "prchk_guard": false, 00:21:26.089 "hdgst": false, 00:21:26.089 "ddgst": false, 00:21:26.089 "psk": "key0", 00:21:26.089 "allow_unrecognized_csi": false 00:21:26.090 } 00:21:26.090 } 00:21:26.090 Got JSON-RPC error 
response 00:21:26.090 GoRPCClient: error on JSON-RPC call 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 99181 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99181 ']' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99181 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99181 00:21:26.090 killing process with pid 99181 00:21:26.090 Received shutdown signal, test time was about 10.000000 seconds 00:21:26.090 00:21:26.090 Latency(us) 00:21:26.090 [2024-11-27T06:23:45.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.090 [2024-11-27T06:23:45.620Z] =================================================================================================================== 00:21:26.090 [2024-11-27T06:23:45.620Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99181' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99181 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99181 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 98529 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 98529 ']' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 98529 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98529 00:21:26.090 killing process with pid 98529 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98529' 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # 
kill 98529 00:21:26.090 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 98529 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.6fiugU9PEv 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.6fiugU9PEv 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=99249 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 99249 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99249 ']' 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.352 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.352 [2024-11-27 06:23:45.743475] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
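[editor's note] target/tls.sh@160-163 above switch to a 48-byte key with digest 2, which shows up as the 02 hash field in the resulting NVMeTLSkey-1:02:... string. Restated from the trace with this run's values:

key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
key_long_path=$(mktemp)               # /tmp/tmp.6fiugU9PEv in this run
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"           # keys are kept owner-only before registration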
00:21:26.352 [2024-11-27 06:23:45.743873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.352 [2024-11-27 06:23:45.858709] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:26.614 [2024-11-27 06:23:45.887038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.614 [2024-11-27 06:23:45.904673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.614 [2024-11-27 06:23:45.904711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.614 [2024-11-27 06:23:45.904717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.614 [2024-11-27 06:23:45.904721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.614 [2024-11-27 06:23:45.904726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.614 [2024-11-27 06:23:45.904983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.6fiugU9PEv 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6fiugU9PEv 00:21:27.185 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:27.445 [2024-11-27 06:23:46.847054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.445 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:27.707 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:27.968 [2024-11-27 06:23:47.251097] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:27.968 [2024-11-27 06:23:47.251254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:27.968 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:27.968 malloc0 00:21:27.968 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:28.230 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:21:28.491 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:28.752 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fiugU9PEv 00:21:28.752 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6fiugU9PEv 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99352 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99352 /var/tmp/bdevperf.sock 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99352 ']' 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.753 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.753 [2024-11-27 06:23:48.138327] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:21:28.753 [2024-11-27 06:23:48.138669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99352 ] 00:21:28.753 [2024-11-27 06:23:48.252539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
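Note: steps tls.sh@52 through tls.sh@59 above reduce to the following target-side RPC sequence (condensed from the xtrace; rpc points at the same scripts/rpc.py driving the running nvmf_tgt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=/tmp/tmp.6fiugU9PEv   # the 0600-permission PSK file created above

$rpc nvmf_create_transport -t tcp -o                  # TCP transport, flags as above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10                       # subsystem, up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k                     # -k: TLS-secured listener
$rpc bdev_malloc_create 32 4096 -b malloc0            # backing bdev for the namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"            # register the PSK as "key0"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0              # admit host1 with that PSK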
00:21:28.753 [2024-11-27 06:23:48.279772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.014 [2024-11-27 06:23:48.298682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.608 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.608 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:29.608 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:21:29.881 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:30.144 [2024-11-27 06:23:49.415990] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.144 TLSTESTn1 00:21:30.144 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:30.144 Running I/O for 10 seconds... 00:21:32.475 6274.00 IOPS, 24.51 MiB/s [2024-11-27T06:23:52.949Z] 6562.50 IOPS, 25.63 MiB/s [2024-11-27T06:23:53.891Z] 6719.33 IOPS, 26.25 MiB/s [2024-11-27T06:23:54.835Z] 6801.50 IOPS, 26.57 MiB/s [2024-11-27T06:23:55.775Z] 6842.40 IOPS, 26.73 MiB/s [2024-11-27T06:23:56.716Z] 6868.17 IOPS, 26.83 MiB/s [2024-11-27T06:23:57.657Z] 6891.43 IOPS, 26.92 MiB/s [2024-11-27T06:23:58.600Z] 6911.00 IOPS, 27.00 MiB/s [2024-11-27T06:23:59.986Z] 6932.44 IOPS, 27.08 MiB/s [2024-11-27T06:23:59.986Z] 6962.00 IOPS, 27.20 MiB/s 00:21:40.456 Latency(us) 00:21:40.456 [2024-11-27T06:23:59.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.456 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:40.456 Verification LBA range: start 0x0 length 0x2000 00:21:40.456 TLSTESTn1 : 10.01 6967.60 27.22 0.00 0.00 18340.54 3680.10 18350.08 00:21:40.456 [2024-11-27T06:23:59.986Z] =================================================================================================================== 00:21:40.456 [2024-11-27T06:23:59.986Z] Total : 6967.60 27.22 0.00 0.00 18340.54 3680.10 18350.08 00:21:40.456 { 00:21:40.456 "results": [ 00:21:40.456 { 00:21:40.456 "job": "TLSTESTn1", 00:21:40.456 "core_mask": "0x4", 00:21:40.456 "workload": "verify", 00:21:40.456 "status": "finished", 00:21:40.456 "verify_range": { 00:21:40.456 "start": 0, 00:21:40.456 "length": 8192 00:21:40.456 }, 00:21:40.456 "queue_depth": 128, 00:21:40.456 "io_size": 4096, 00:21:40.456 "runtime": 10.009754, 00:21:40.456 "iops": 6967.603799254208, 00:21:40.456 "mibps": 27.21720234083675, 00:21:40.456 "io_failed": 0, 00:21:40.456 "io_timeout": 0, 00:21:40.456 "avg_latency_us": 18340.538604831738, 00:21:40.456 "min_latency_us": 3680.0984615384614, 00:21:40.456 "max_latency_us": 18350.08 00:21:40.456 } 00:21:40.456 ], 00:21:40.456 "core_count": 1 00:21:40.456 } 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 99352 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99352 ']' 
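Note: run_bdevperf above is the initiator half of the test. A bdevperf started with -z idles until configured over its own RPC socket; it receives the same key file, attaches to the TLS listener, and the harness then drives the 10-second verify workload whose per-second IOPS samples and latency table appear above (paths condensed from the xtrace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &   # -z: wait for RPC config
# (the harness waits for $sock to appear before issuing RPCs)

$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0                  # TLS handshake happens here
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s "$sock" perform_tests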
00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99352 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99352 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:40.456 killing process with pid 99352 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99352' 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99352 00:21:40.456 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.456 00:21:40.456 Latency(us) 00:21:40.456 [2024-11-27T06:23:59.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.456 [2024-11-27T06:23:59.986Z] =================================================================================================================== 00:21:40.456 [2024-11-27T06:23:59.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99352 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.6fiugU9PEv 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fiugU9PEv 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fiugU9PEv 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fiugU9PEv 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6fiugU9PEv 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99506 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99506 /var/tmp/bdevperf.sock 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99506 ']' 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.456 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.456 [2024-11-27 06:23:59.791871] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:21:40.456 [2024-11-27 06:23:59.791961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99506 ] 00:21:40.456 [2024-11-27 06:23:59.906440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
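Note: this NOT run_bdevperf pass is a deliberate failure case: the key file was just re-moded to 0666. A minimal stand-in for what is being asserted, assuming keyring_file rejects any key file accessible by group or other:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
chmod 0666 /tmp/tmp.6fiugU9PEv
if $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv; then
    echo "unexpected: world-readable PSK was accepted" >&2
    exit 1
fi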
00:21:40.456 [2024-11-27 06:23:59.933679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.456 [2024-11-27 06:23:59.951041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.440 06:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.440 06:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:41.440 06:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:21:41.440 [2024-11-27 06:24:00.863886] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6fiugU9PEv': 0100666 00:21:41.440 [2024-11-27 06:24:00.863916] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:41.440 2024/11/27 06:24:00 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.6fiugU9PEv], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:21:41.440 request: 00:21:41.440 { 00:21:41.440 "method": "keyring_file_add_key", 00:21:41.440 "params": { 00:21:41.440 "name": "key0", 00:21:41.440 "path": "/tmp/tmp.6fiugU9PEv" 00:21:41.440 } 00:21:41.440 } 00:21:41.440 Got JSON-RPC error response 00:21:41.440 GoRPCClient: error on JSON-RPC call 00:21:41.440 06:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:41.702 [2024-11-27 06:24:01.067995] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.702 [2024-11-27 06:24:01.068029] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:41.702 2024/11/27 06:24:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:21:41.702 request: 00:21:41.702 { 00:21:41.702 "method": "bdev_nvme_attach_controller", 00:21:41.702 "params": { 00:21:41.702 "name": "TLSTEST", 00:21:41.702 "trtype": "tcp", 00:21:41.702 "traddr": "10.0.0.3", 00:21:41.702 "adrfam": "ipv4", 00:21:41.702 "trsvcid": "4420", 00:21:41.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.702 "prchk_reftag": false, 00:21:41.702 "prchk_guard": false, 00:21:41.702 "hdgst": false, 00:21:41.702 "ddgst": false, 00:21:41.702 "psk": "key0", 00:21:41.702 "allow_unrecognized_csi": false 00:21:41.702 } 00:21:41.702 } 00:21:41.702 Got JSON-RPC error response 00:21:41.702 GoRPCClient: error on JSON-RPC call 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 99506 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99506 ']' 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99506 00:21:41.702 06:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99506 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:41.702 killing process with pid 99506 00:21:41.702 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.702 00:21:41.702 Latency(us) 00:21:41.702 [2024-11-27T06:24:01.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.702 [2024-11-27T06:24:01.232Z] =================================================================================================================== 00:21:41.702 [2024-11-27T06:24:01.232Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99506' 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99506 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99506 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 99249 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99249 ']' 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99249 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99249 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.702 killing process with pid 99249 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99249' 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99249 00:21:41.702 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99249 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=99570 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 99570 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99570 ']' 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.965 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.965 [2024-11-27 06:24:01.378886] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:21:41.965 [2024-11-27 06:24:01.378956] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.965 [2024-11-27 06:24:01.493762] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:42.226 [2024-11-27 06:24:01.509732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.226 [2024-11-27 06:24:01.526375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.226 [2024-11-27 06:24:01.526411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.226 [2024-11-27 06:24:01.526415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.226 [2024-11-27 06:24:01.526419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.226 [2024-11-27 06:24:01.526423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
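Note: the NOT setup_nvmf_tgt call that follows leans on the harness's status-inverting helper, whose xtrace (valid_exec_arg, es=1) is visible throughout this section. A rough sketch of the idiom; the real helper also special-cases exit codes above 128 left behind by signals:

NOT() {
    local es=0
    "$@" || es=$?
    # invert: succeed only when the wrapped command failed
    (( es != 0 ))
}

NOT setup_nvmf_tgt /tmp/tmp.6fiugU9PEv   # passes only because the 0666 key is rejected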
00:21:42.226 [2024-11-27 06:24:01.526632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.6fiugU9PEv 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6fiugU9PEv 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.6fiugU9PEv 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6fiugU9PEv 00:21:42.798 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:43.057 [2024-11-27 06:24:02.444639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.057 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:43.315 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:43.572 [2024-11-27 06:24:02.852696] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.572 [2024-11-27 06:24:02.852843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:43.572 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:43.572 malloc0 00:21:43.572 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:43.831 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:21:44.089 [2024-11-27 06:24:03.462575] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6fiugU9PEv': 
0100666 00:21:44.089 [2024-11-27 06:24:03.462604] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:44.089 2024/11/27 06:24:03 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.6fiugU9PEv], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:21:44.089 request: 00:21:44.089 { 00:21:44.089 "method": "keyring_file_add_key", 00:21:44.089 "params": { 00:21:44.089 "name": "key0", 00:21:44.089 "path": "/tmp/tmp.6fiugU9PEv" 00:21:44.089 } 00:21:44.089 } 00:21:44.089 Got JSON-RPC error response 00:21:44.089 GoRPCClient: error on JSON-RPC call 00:21:44.089 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:44.347 [2024-11-27 06:24:03.670630] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:44.347 [2024-11-27 06:24:03.670671] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:44.347 2024/11/27 06:24:03 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:21:44.347 request: 00:21:44.347 { 00:21:44.347 "method": "nvmf_subsystem_add_host", 00:21:44.347 "params": { 00:21:44.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.347 "host": "nqn.2016-06.io.spdk:host1", 00:21:44.347 "psk": "key0" 00:21:44.347 } 00:21:44.347 } 00:21:44.347 Got JSON-RPC error response 00:21:44.347 GoRPCClient: error on JSON-RPC call 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 99570 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99570 ']' 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99570 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99570 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:44.347 killing process with pid 99570 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99570' 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99570 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99570 00:21:44.347 06:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.6fiugU9PEv 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=99687 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 99687 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99687 ']' 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.347 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.347 [2024-11-27 06:24:03.866385] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:21:44.347 [2024-11-27 06:24:03.866453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.604 [2024-11-27 06:24:03.981782] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:44.604 [2024-11-27 06:24:04.008845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.604 [2024-11-27 06:24:04.024359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.604 [2024-11-27 06:24:04.024392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.604 [2024-11-27 06:24:04.024397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.605 [2024-11-27 06:24:04.024402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.605 [2024-11-27 06:24:04.024406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
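Note: the Code=-32603 add_host failure above is a downstream symptom: key0 never entered the keyring, so nvmf_tcp_subsystem_add_host cannot resolve it. Assuming the keyring_get_keys RPC (not exercised in this run) is available, the precondition can be checked explicitly before granting host access:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if ! $rpc keyring_get_keys | grep -q '"name": "key0"'; then
    echo "key0 missing from keyring; check the key file mode (must be 0600)" >&2
fi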
00:21:44.605 [2024-11-27 06:24:04.024609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.6fiugU9PEv 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6fiugU9PEv 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:45.539 [2024-11-27 06:24:04.964214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.539 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:45.797 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:21:46.056 [2024-11-27 06:24:05.392257] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.056 [2024-11-27 06:24:05.392390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:46.056 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:46.313 malloc0 00:21:46.313 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:46.570 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=99797 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 99797 /var/tmp/bdevperf.sock 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99797 ']' 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
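Note: the save_config dumps that follow (tls.sh@198 for the target, tls.sh@199 for the bdevperf side) snapshot the applied JSON-RPC state; they are easier to check piecewise than to read whole, for example:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > tgt.json
jq '.subsystems[] | select(.subsystem == "keyring")' tgt.json
jq '.subsystems[] | select(.subsystem == "nvmf")
    | .config[] | select(.method == "nvmf_subsystem_add_listener")' tgt.json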
00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.829 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.829 [2024-11-27 06:24:06.354974] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:21:46.829 [2024-11-27 06:24:06.355349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99797 ] 00:21:47.087 [2024-11-27 06:24:06.468830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:47.087 [2024-11-27 06:24:06.489605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.087 [2024-11-27 06:24:06.509081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.087 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.087 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:47.087 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:21:47.345 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:47.603 [2024-11-27 06:24:07.003029] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.603 TLSTESTn1 00:21:47.603 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:47.862 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:47.862 "subsystems": [ 00:21:47.862 { 00:21:47.862 "subsystem": "keyring", 00:21:47.862 "config": [ 00:21:47.862 { 00:21:47.862 "method": "keyring_file_add_key", 00:21:47.862 "params": { 00:21:47.862 "name": "key0", 00:21:47.862 "path": "/tmp/tmp.6fiugU9PEv" 00:21:47.862 } 00:21:47.862 } 00:21:47.862 ] 00:21:47.862 }, 00:21:47.862 { 00:21:47.862 "subsystem": "iobuf", 00:21:47.862 "config": [ 00:21:47.862 { 00:21:47.862 "method": "iobuf_set_options", 00:21:47.862 "params": { 00:21:47.862 "enable_numa": false, 00:21:47.862 "large_bufsize": 135168, 00:21:47.862 "large_pool_count": 1024, 00:21:47.862 "small_bufsize": 8192, 00:21:47.862 "small_pool_count": 8192 00:21:47.862 } 00:21:47.862 } 00:21:47.862 ] 00:21:47.862 }, 00:21:47.862 { 00:21:47.862 "subsystem": "sock", 00:21:47.862 "config": [ 00:21:47.862 { 00:21:47.862 "method": "sock_set_default_impl", 00:21:47.862 "params": { 00:21:47.862 "impl_name": "posix" 00:21:47.862 } 00:21:47.862 }, 00:21:47.862 { 
00:21:47.862 "method": "sock_impl_set_options", 00:21:47.862 "params": { 00:21:47.862 "enable_ktls": false, 00:21:47.862 "enable_placement_id": 0, 00:21:47.862 "enable_quickack": false, 00:21:47.862 "enable_recv_pipe": true, 00:21:47.862 "enable_zerocopy_send_client": false, 00:21:47.862 "enable_zerocopy_send_server": true, 00:21:47.862 "impl_name": "ssl", 00:21:47.862 "recv_buf_size": 4096, 00:21:47.862 "send_buf_size": 4096, 00:21:47.862 "tls_version": 0, 00:21:47.862 "zerocopy_threshold": 0 00:21:47.862 } 00:21:47.862 }, 00:21:47.862 { 00:21:47.862 "method": "sock_impl_set_options", 00:21:47.862 "params": { 00:21:47.862 "enable_ktls": false, 00:21:47.862 "enable_placement_id": 0, 00:21:47.862 "enable_quickack": false, 00:21:47.862 "enable_recv_pipe": true, 00:21:47.862 "enable_zerocopy_send_client": false, 00:21:47.862 "enable_zerocopy_send_server": true, 00:21:47.862 "impl_name": "posix", 00:21:47.862 "recv_buf_size": 2097152, 00:21:47.862 "send_buf_size": 2097152, 00:21:47.862 "tls_version": 0, 00:21:47.862 "zerocopy_threshold": 0 00:21:47.862 } 00:21:47.862 } 00:21:47.862 ] 00:21:47.862 }, 00:21:47.862 { 00:21:47.862 "subsystem": "vmd", 00:21:47.862 "config": [] 00:21:47.862 }, 00:21:47.862 { 00:21:47.862 "subsystem": "accel", 00:21:47.862 "config": [ 00:21:47.862 { 00:21:47.863 "method": "accel_set_options", 00:21:47.863 "params": { 00:21:47.863 "buf_count": 2048, 00:21:47.863 "large_cache_size": 16, 00:21:47.863 "sequence_count": 2048, 00:21:47.863 "small_cache_size": 128, 00:21:47.863 "task_count": 2048 00:21:47.863 } 00:21:47.863 } 00:21:47.863 ] 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "subsystem": "bdev", 00:21:47.863 "config": [ 00:21:47.863 { 00:21:47.863 "method": "bdev_set_options", 00:21:47.863 "params": { 00:21:47.863 "bdev_auto_examine": true, 00:21:47.863 "bdev_io_cache_size": 256, 00:21:47.863 "bdev_io_pool_size": 65535, 00:21:47.863 "iobuf_large_cache_size": 16, 00:21:47.863 "iobuf_small_cache_size": 128 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "bdev_raid_set_options", 00:21:47.863 "params": { 00:21:47.863 "process_max_bandwidth_mb_sec": 0, 00:21:47.863 "process_window_size_kb": 1024 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "bdev_iscsi_set_options", 00:21:47.863 "params": { 00:21:47.863 "timeout_sec": 30 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "bdev_nvme_set_options", 00:21:47.863 "params": { 00:21:47.863 "action_on_timeout": "none", 00:21:47.863 "allow_accel_sequence": false, 00:21:47.863 "arbitration_burst": 0, 00:21:47.863 "bdev_retry_count": 3, 00:21:47.863 "ctrlr_loss_timeout_sec": 0, 00:21:47.863 "delay_cmd_submit": true, 00:21:47.863 "dhchap_dhgroups": [ 00:21:47.863 "null", 00:21:47.863 "ffdhe2048", 00:21:47.863 "ffdhe3072", 00:21:47.863 "ffdhe4096", 00:21:47.863 "ffdhe6144", 00:21:47.863 "ffdhe8192" 00:21:47.863 ], 00:21:47.863 "dhchap_digests": [ 00:21:47.863 "sha256", 00:21:47.863 "sha384", 00:21:47.863 "sha512" 00:21:47.863 ], 00:21:47.863 "disable_auto_failback": false, 00:21:47.863 "fast_io_fail_timeout_sec": 0, 00:21:47.863 "generate_uuids": false, 00:21:47.863 "high_priority_weight": 0, 00:21:47.863 "io_path_stat": false, 00:21:47.863 "io_queue_requests": 0, 00:21:47.863 "keep_alive_timeout_ms": 10000, 00:21:47.863 "low_priority_weight": 0, 00:21:47.863 "medium_priority_weight": 0, 00:21:47.863 "nvme_adminq_poll_period_us": 10000, 00:21:47.863 "nvme_error_stat": false, 00:21:47.863 "nvme_ioq_poll_period_us": 0, 00:21:47.863 "rdma_cm_event_timeout_ms": 0, 
00:21:47.863 "rdma_max_cq_size": 0, 00:21:47.863 "rdma_srq_size": 0, 00:21:47.863 "reconnect_delay_sec": 0, 00:21:47.863 "timeout_admin_us": 0, 00:21:47.863 "timeout_us": 0, 00:21:47.863 "transport_ack_timeout": 0, 00:21:47.863 "transport_retry_count": 4, 00:21:47.863 "transport_tos": 0 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "bdev_nvme_set_hotplug", 00:21:47.863 "params": { 00:21:47.863 "enable": false, 00:21:47.863 "period_us": 100000 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "bdev_malloc_create", 00:21:47.863 "params": { 00:21:47.863 "block_size": 4096, 00:21:47.863 "dif_is_head_of_md": false, 00:21:47.863 "dif_pi_format": 0, 00:21:47.863 "dif_type": 0, 00:21:47.863 "md_size": 0, 00:21:47.863 "name": "malloc0", 00:21:47.863 "num_blocks": 8192, 00:21:47.863 "optimal_io_boundary": 0, 00:21:47.863 "physical_block_size": 4096, 00:21:47.863 "uuid": "f9787380-8de7-4a3b-afbc-62f4145a8976" 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "bdev_wait_for_examine" 00:21:47.863 } 00:21:47.863 ] 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "subsystem": "nbd", 00:21:47.863 "config": [] 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "subsystem": "scheduler", 00:21:47.863 "config": [ 00:21:47.863 { 00:21:47.863 "method": "framework_set_scheduler", 00:21:47.863 "params": { 00:21:47.863 "name": "static" 00:21:47.863 } 00:21:47.863 } 00:21:47.863 ] 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "subsystem": "nvmf", 00:21:47.863 "config": [ 00:21:47.863 { 00:21:47.863 "method": "nvmf_set_config", 00:21:47.863 "params": { 00:21:47.863 "admin_cmd_passthru": { 00:21:47.863 "identify_ctrlr": false 00:21:47.863 }, 00:21:47.863 "dhchap_dhgroups": [ 00:21:47.863 "null", 00:21:47.863 "ffdhe2048", 00:21:47.863 "ffdhe3072", 00:21:47.863 "ffdhe4096", 00:21:47.863 "ffdhe6144", 00:21:47.863 "ffdhe8192" 00:21:47.863 ], 00:21:47.863 "dhchap_digests": [ 00:21:47.863 "sha256", 00:21:47.863 "sha384", 00:21:47.863 "sha512" 00:21:47.863 ], 00:21:47.863 "discovery_filter": "match_any" 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "nvmf_set_max_subsystems", 00:21:47.863 "params": { 00:21:47.863 "max_subsystems": 1024 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "nvmf_set_crdt", 00:21:47.863 "params": { 00:21:47.863 "crdt1": 0, 00:21:47.863 "crdt2": 0, 00:21:47.863 "crdt3": 0 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "nvmf_create_transport", 00:21:47.863 "params": { 00:21:47.863 "abort_timeout_sec": 1, 00:21:47.863 "ack_timeout": 0, 00:21:47.863 "buf_cache_size": 4294967295, 00:21:47.863 "c2h_success": false, 00:21:47.863 "data_wr_pool_size": 0, 00:21:47.863 "dif_insert_or_strip": false, 00:21:47.863 "in_capsule_data_size": 4096, 00:21:47.863 "io_unit_size": 131072, 00:21:47.863 "max_aq_depth": 128, 00:21:47.863 "max_io_qpairs_per_ctrlr": 127, 00:21:47.863 "max_io_size": 131072, 00:21:47.863 "max_queue_depth": 128, 00:21:47.863 "num_shared_buffers": 511, 00:21:47.863 "sock_priority": 0, 00:21:47.863 "trtype": "TCP", 00:21:47.863 "zcopy": false 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "nvmf_create_subsystem", 00:21:47.863 "params": { 00:21:47.863 "allow_any_host": false, 00:21:47.863 "ana_reporting": false, 00:21:47.863 "max_cntlid": 65519, 00:21:47.863 "max_namespaces": 10, 00:21:47.863 "min_cntlid": 1, 00:21:47.863 "model_number": "SPDK bdev Controller", 00:21:47.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.863 "serial_number": "SPDK00000000000001" 
00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "nvmf_subsystem_add_host", 00:21:47.863 "params": { 00:21:47.863 "host": "nqn.2016-06.io.spdk:host1", 00:21:47.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.863 "psk": "key0" 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "nvmf_subsystem_add_ns", 00:21:47.863 "params": { 00:21:47.863 "namespace": { 00:21:47.863 "bdev_name": "malloc0", 00:21:47.863 "nguid": "F97873808DE74A3BAFBC62F4145A8976", 00:21:47.863 "no_auto_visible": false, 00:21:47.863 "nsid": 1, 00:21:47.863 "uuid": "f9787380-8de7-4a3b-afbc-62f4145a8976" 00:21:47.863 }, 00:21:47.863 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:47.863 } 00:21:47.863 }, 00:21:47.863 { 00:21:47.863 "method": "nvmf_subsystem_add_listener", 00:21:47.863 "params": { 00:21:47.863 "listen_address": { 00:21:47.863 "adrfam": "IPv4", 00:21:47.863 "traddr": "10.0.0.3", 00:21:47.863 "trsvcid": "4420", 00:21:47.863 "trtype": "TCP" 00:21:47.863 }, 00:21:47.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.863 "secure_channel": true 00:21:47.863 } 00:21:47.863 } 00:21:47.863 ] 00:21:47.863 } 00:21:47.863 ] 00:21:47.863 }' 00:21:47.863 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:48.122 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:48.122 "subsystems": [ 00:21:48.122 { 00:21:48.122 "subsystem": "keyring", 00:21:48.122 "config": [ 00:21:48.122 { 00:21:48.122 "method": "keyring_file_add_key", 00:21:48.122 "params": { 00:21:48.122 "name": "key0", 00:21:48.122 "path": "/tmp/tmp.6fiugU9PEv" 00:21:48.122 } 00:21:48.122 } 00:21:48.122 ] 00:21:48.122 }, 00:21:48.122 { 00:21:48.122 "subsystem": "iobuf", 00:21:48.122 "config": [ 00:21:48.122 { 00:21:48.122 "method": "iobuf_set_options", 00:21:48.122 "params": { 00:21:48.122 "enable_numa": false, 00:21:48.122 "large_bufsize": 135168, 00:21:48.122 "large_pool_count": 1024, 00:21:48.122 "small_bufsize": 8192, 00:21:48.122 "small_pool_count": 8192 00:21:48.122 } 00:21:48.122 } 00:21:48.122 ] 00:21:48.122 }, 00:21:48.122 { 00:21:48.122 "subsystem": "sock", 00:21:48.122 "config": [ 00:21:48.122 { 00:21:48.122 "method": "sock_set_default_impl", 00:21:48.122 "params": { 00:21:48.122 "impl_name": "posix" 00:21:48.122 } 00:21:48.122 }, 00:21:48.122 { 00:21:48.122 "method": "sock_impl_set_options", 00:21:48.122 "params": { 00:21:48.122 "enable_ktls": false, 00:21:48.122 "enable_placement_id": 0, 00:21:48.122 "enable_quickack": false, 00:21:48.122 "enable_recv_pipe": true, 00:21:48.122 "enable_zerocopy_send_client": false, 00:21:48.122 "enable_zerocopy_send_server": true, 00:21:48.122 "impl_name": "ssl", 00:21:48.122 "recv_buf_size": 4096, 00:21:48.122 "send_buf_size": 4096, 00:21:48.122 "tls_version": 0, 00:21:48.122 "zerocopy_threshold": 0 00:21:48.122 } 00:21:48.122 }, 00:21:48.122 { 00:21:48.122 "method": "sock_impl_set_options", 00:21:48.122 "params": { 00:21:48.122 "enable_ktls": false, 00:21:48.122 "enable_placement_id": 0, 00:21:48.122 "enable_quickack": false, 00:21:48.123 "enable_recv_pipe": true, 00:21:48.123 "enable_zerocopy_send_client": false, 00:21:48.123 "enable_zerocopy_send_server": true, 00:21:48.123 "impl_name": "posix", 00:21:48.123 "recv_buf_size": 2097152, 00:21:48.123 "send_buf_size": 2097152, 00:21:48.123 "tls_version": 0, 00:21:48.123 "zerocopy_threshold": 0 00:21:48.123 } 00:21:48.123 } 00:21:48.123 ] 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "subsystem": 
"vmd", 00:21:48.123 "config": [] 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "subsystem": "accel", 00:21:48.123 "config": [ 00:21:48.123 { 00:21:48.123 "method": "accel_set_options", 00:21:48.123 "params": { 00:21:48.123 "buf_count": 2048, 00:21:48.123 "large_cache_size": 16, 00:21:48.123 "sequence_count": 2048, 00:21:48.123 "small_cache_size": 128, 00:21:48.123 "task_count": 2048 00:21:48.123 } 00:21:48.123 } 00:21:48.123 ] 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "subsystem": "bdev", 00:21:48.123 "config": [ 00:21:48.123 { 00:21:48.123 "method": "bdev_set_options", 00:21:48.123 "params": { 00:21:48.123 "bdev_auto_examine": true, 00:21:48.123 "bdev_io_cache_size": 256, 00:21:48.123 "bdev_io_pool_size": 65535, 00:21:48.123 "iobuf_large_cache_size": 16, 00:21:48.123 "iobuf_small_cache_size": 128 00:21:48.123 } 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "method": "bdev_raid_set_options", 00:21:48.123 "params": { 00:21:48.123 "process_max_bandwidth_mb_sec": 0, 00:21:48.123 "process_window_size_kb": 1024 00:21:48.123 } 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "method": "bdev_iscsi_set_options", 00:21:48.123 "params": { 00:21:48.123 "timeout_sec": 30 00:21:48.123 } 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "method": "bdev_nvme_set_options", 00:21:48.123 "params": { 00:21:48.123 "action_on_timeout": "none", 00:21:48.123 "allow_accel_sequence": false, 00:21:48.123 "arbitration_burst": 0, 00:21:48.123 "bdev_retry_count": 3, 00:21:48.123 "ctrlr_loss_timeout_sec": 0, 00:21:48.123 "delay_cmd_submit": true, 00:21:48.123 "dhchap_dhgroups": [ 00:21:48.123 "null", 00:21:48.123 "ffdhe2048", 00:21:48.123 "ffdhe3072", 00:21:48.123 "ffdhe4096", 00:21:48.123 "ffdhe6144", 00:21:48.123 "ffdhe8192" 00:21:48.123 ], 00:21:48.123 "dhchap_digests": [ 00:21:48.123 "sha256", 00:21:48.123 "sha384", 00:21:48.123 "sha512" 00:21:48.123 ], 00:21:48.123 "disable_auto_failback": false, 00:21:48.123 "fast_io_fail_timeout_sec": 0, 00:21:48.123 "generate_uuids": false, 00:21:48.123 "high_priority_weight": 0, 00:21:48.123 "io_path_stat": false, 00:21:48.123 "io_queue_requests": 512, 00:21:48.123 "keep_alive_timeout_ms": 10000, 00:21:48.123 "low_priority_weight": 0, 00:21:48.123 "medium_priority_weight": 0, 00:21:48.123 "nvme_adminq_poll_period_us": 10000, 00:21:48.123 "nvme_error_stat": false, 00:21:48.123 "nvme_ioq_poll_period_us": 0, 00:21:48.123 "rdma_cm_event_timeout_ms": 0, 00:21:48.123 "rdma_max_cq_size": 0, 00:21:48.123 "rdma_srq_size": 0, 00:21:48.123 "reconnect_delay_sec": 0, 00:21:48.123 "timeout_admin_us": 0, 00:21:48.123 "timeout_us": 0, 00:21:48.123 "transport_ack_timeout": 0, 00:21:48.123 "transport_retry_count": 4, 00:21:48.123 "transport_tos": 0 00:21:48.123 } 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "method": "bdev_nvme_attach_controller", 00:21:48.123 "params": { 00:21:48.123 "adrfam": "IPv4", 00:21:48.123 "ctrlr_loss_timeout_sec": 0, 00:21:48.123 "ddgst": false, 00:21:48.123 "fast_io_fail_timeout_sec": 0, 00:21:48.123 "hdgst": false, 00:21:48.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.123 "multipath": "multipath", 00:21:48.123 "name": "TLSTEST", 00:21:48.123 "prchk_guard": false, 00:21:48.123 "prchk_reftag": false, 00:21:48.123 "psk": "key0", 00:21:48.123 "reconnect_delay_sec": 0, 00:21:48.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.123 "traddr": "10.0.0.3", 00:21:48.123 "trsvcid": "4420", 00:21:48.123 "trtype": "TCP" 00:21:48.123 } 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "method": "bdev_nvme_set_hotplug", 00:21:48.123 "params": { 00:21:48.123 "enable": false, 00:21:48.123 
"period_us": 100000 00:21:48.123 } 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "method": "bdev_wait_for_examine" 00:21:48.123 } 00:21:48.123 ] 00:21:48.123 }, 00:21:48.123 { 00:21:48.123 "subsystem": "nbd", 00:21:48.123 "config": [] 00:21:48.123 } 00:21:48.123 ] 00:21:48.123 }' 00:21:48.123 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 99797 00:21:48.123 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99797 ']' 00:21:48.123 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99797 00:21:48.123 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:48.123 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.123 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99797 00:21:48.382 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:48.382 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:48.382 killing process with pid 99797 00:21:48.382 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99797' 00:21:48.382 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99797 00:21:48.382 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.382 00:21:48.383 Latency(us) 00:21:48.383 [2024-11-27T06:24:07.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.383 [2024-11-27T06:24:07.913Z] =================================================================================================================== 00:21:48.383 [2024-11-27T06:24:07.913Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99797 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 99687 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99687 ']' 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99687 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99687 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:48.383 killing process with pid 99687 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99687' 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99687 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99687 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.383 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:48.383 "subsystems": [ 00:21:48.383 { 00:21:48.383 "subsystem": "keyring", 00:21:48.383 "config": [ 00:21:48.383 { 00:21:48.383 "method": "keyring_file_add_key", 00:21:48.383 "params": { 00:21:48.383 "name": "key0", 00:21:48.383 "path": "/tmp/tmp.6fiugU9PEv" 00:21:48.383 } 00:21:48.383 } 00:21:48.383 ] 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "subsystem": "iobuf", 00:21:48.383 "config": [ 00:21:48.383 { 00:21:48.383 "method": "iobuf_set_options", 00:21:48.383 "params": { 00:21:48.383 "enable_numa": false, 00:21:48.383 "large_bufsize": 135168, 00:21:48.383 "large_pool_count": 1024, 00:21:48.383 "small_bufsize": 8192, 00:21:48.383 "small_pool_count": 8192 00:21:48.383 } 00:21:48.383 } 00:21:48.383 ] 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "subsystem": "sock", 00:21:48.383 "config": [ 00:21:48.383 { 00:21:48.383 "method": "sock_set_default_impl", 00:21:48.383 "params": { 00:21:48.383 "impl_name": "posix" 00:21:48.383 } 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "method": "sock_impl_set_options", 00:21:48.383 "params": { 00:21:48.383 "enable_ktls": false, 00:21:48.383 "enable_placement_id": 0, 00:21:48.383 "enable_quickack": false, 00:21:48.383 "enable_recv_pipe": true, 00:21:48.383 "enable_zerocopy_send_client": false, 00:21:48.383 "enable_zerocopy_send_server": true, 00:21:48.383 "impl_name": "ssl", 00:21:48.383 "recv_buf_size": 4096, 00:21:48.383 "send_buf_size": 4096, 00:21:48.383 "tls_version": 0, 00:21:48.383 "zerocopy_threshold": 0 00:21:48.383 } 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "method": "sock_impl_set_options", 00:21:48.383 "params": { 00:21:48.383 "enable_ktls": false, 00:21:48.383 "enable_placement_id": 0, 00:21:48.383 "enable_quickack": false, 00:21:48.383 "enable_recv_pipe": true, 00:21:48.383 "enable_zerocopy_send_client": false, 00:21:48.383 "enable_zerocopy_send_server": true, 00:21:48.383 "impl_name": "posix", 00:21:48.383 "recv_buf_size": 2097152, 00:21:48.383 "send_buf_size": 2097152, 00:21:48.383 "tls_version": 0, 00:21:48.383 "zerocopy_threshold": 0 00:21:48.383 } 00:21:48.383 } 00:21:48.383 ] 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "subsystem": "vmd", 00:21:48.383 "config": [] 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "subsystem": "accel", 00:21:48.383 "config": [ 00:21:48.383 { 00:21:48.383 "method": "accel_set_options", 00:21:48.383 "params": { 00:21:48.383 "buf_count": 2048, 00:21:48.383 "large_cache_size": 16, 00:21:48.383 "sequence_count": 2048, 00:21:48.383 "small_cache_size": 128, 00:21:48.383 "task_count": 2048 00:21:48.383 } 00:21:48.383 } 00:21:48.383 ] 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "subsystem": "bdev", 00:21:48.383 "config": [ 00:21:48.383 { 00:21:48.383 "method": "bdev_set_options", 00:21:48.383 "params": { 00:21:48.383 "bdev_auto_examine": true, 00:21:48.383 "bdev_io_cache_size": 256, 00:21:48.383 "bdev_io_pool_size": 65535, 00:21:48.383 "iobuf_large_cache_size": 16, 00:21:48.383 "iobuf_small_cache_size": 128 00:21:48.383 } 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "method": "bdev_raid_set_options", 00:21:48.383 "params": { 00:21:48.383 "process_max_bandwidth_mb_sec": 0, 00:21:48.383 "process_window_size_kb": 1024 00:21:48.383 } 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "method": "bdev_iscsi_set_options", 00:21:48.383 "params": { 00:21:48.383 "timeout_sec": 30 
00:21:48.383 } 00:21:48.383 }, 00:21:48.383 { 00:21:48.383 "method": "bdev_nvme_set_options", 00:21:48.383 "params": { 00:21:48.383 "action_on_timeout": "none", 00:21:48.383 "allow_accel_sequence": false, 00:21:48.383 "arbitration_burst": 0, 00:21:48.383 "bdev_retry_count": 3, 00:21:48.383 "ctrlr_loss_timeout_sec": 0, 00:21:48.383 "delay_cmd_submit": true, 00:21:48.383 "dhchap_dhgroups": [ 00:21:48.383 "null", 00:21:48.383 "ffdhe2048", 00:21:48.383 "ffdhe3072", 00:21:48.383 "ffdhe4096", 00:21:48.383 "ffdhe6144", 00:21:48.383 "ffdhe8192" 00:21:48.383 ], 00:21:48.383 "dhchap_digests": [ 00:21:48.383 "sha256", 00:21:48.383 "sha384", 00:21:48.383 "sha512" 00:21:48.383 ], 00:21:48.383 "disable_auto_failback": false, 00:21:48.383 "fast_io_fail_timeout_sec": 0, 00:21:48.383 "generate_uuids": false, 00:21:48.383 "high_priority_weight": 0, 00:21:48.383 "io_path_stat": false, 00:21:48.383 "io_queue_requests": 0, 00:21:48.384 "keep_alive_timeout_ms": 10000, 00:21:48.384 "low_priority_weight": 0, 00:21:48.384 "medium_priority_weight": 0, 00:21:48.384 "nvme_adminq_poll_period_us": 10000, 00:21:48.384 "nvme_error_stat": false, 00:21:48.384 "nvme_ioq_poll_period_us": 0, 00:21:48.384 "rdma_cm_event_timeout_ms": 0, 00:21:48.384 "rdma_max_cq_size": 0, 00:21:48.384 "rdma_srq_size": 0, 00:21:48.384 "reconnect_delay_sec": 0, 00:21:48.384 "timeout_admin_us": 0, 00:21:48.384 "timeout_us": 0, 00:21:48.384 "transport_ack_timeout": 0, 00:21:48.384 "transport_retry_count": 4, 00:21:48.384 "transport_tos": 0 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "bdev_nvme_set_hotplug", 00:21:48.384 "params": { 00:21:48.384 "enable": false, 00:21:48.384 "period_us": 100000 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "bdev_malloc_create", 00:21:48.384 "params": { 00:21:48.384 "block_size": 4096, 00:21:48.384 "dif_is_head_of_md": false, 00:21:48.384 "dif_pi_format": 0, 00:21:48.384 "dif_type": 0, 00:21:48.384 "md_size": 0, 00:21:48.384 "name": "malloc0", 00:21:48.384 "num_blocks": 8192, 00:21:48.384 "optimal_io_boundary": 0, 00:21:48.384 "physical_block_size": 4096, 00:21:48.384 "uuid": "f9787380-8de7-4a3b-afbc-62f4145a8976" 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "bdev_wait_for_examine" 00:21:48.384 } 00:21:48.384 ] 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "subsystem": "nbd", 00:21:48.384 "config": [] 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "subsystem": "scheduler", 00:21:48.384 "config": [ 00:21:48.384 { 00:21:48.384 "method": "framework_set_scheduler", 00:21:48.384 "params": { 00:21:48.384 "name": "static" 00:21:48.384 } 00:21:48.384 } 00:21:48.384 ] 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "subsystem": "nvmf", 00:21:48.384 "config": [ 00:21:48.384 { 00:21:48.384 "method": "nvmf_set_config", 00:21:48.384 "params": { 00:21:48.384 "admin_cmd_passthru": { 00:21:48.384 "identify_ctrlr": false 00:21:48.384 }, 00:21:48.384 "dhchap_dhgroups": [ 00:21:48.384 "null", 00:21:48.384 "ffdhe2048", 00:21:48.384 "ffdhe3072", 00:21:48.384 "ffdhe4096", 00:21:48.384 "ffdhe6144", 00:21:48.384 "ffdhe8192" 00:21:48.384 ], 00:21:48.384 "dhchap_digests": [ 00:21:48.384 "sha256", 00:21:48.384 "sha384", 00:21:48.384 "sha512" 00:21:48.384 ], 00:21:48.384 "discovery_filter": "match_any" 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "nvmf_set_max_subsystems", 00:21:48.384 "params": { 00:21:48.384 "max_subsystems": 1024 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "nvmf_set_crdt", 00:21:48.384 "params": { 00:21:48.384 
"crdt1": 0, 00:21:48.384 "crdt2": 0, 00:21:48.384 "crdt3": 0 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "nvmf_create_transport", 00:21:48.384 "params": { 00:21:48.384 "abort_timeout_sec": 1, 00:21:48.384 "ack_timeout": 0, 00:21:48.384 "buf_cache_size": 4294967295, 00:21:48.384 "c2h_success": false, 00:21:48.384 "data_wr_pool_size": 0, 00:21:48.384 "dif_insert_or_strip": false, 00:21:48.384 "in_capsule_data_size": 4096, 00:21:48.384 "io_unit_size": 131072, 00:21:48.384 "max_aq_depth": 128, 00:21:48.384 "max_io_qpairs_per_ctrlr": 127, 00:21:48.384 "max_io_size": 131072, 00:21:48.384 "max_queue_depth": 128, 00:21:48.384 "num_shared_buffers": 511, 00:21:48.384 "sock_priority": 0, 00:21:48.384 "trtype": "TCP", 00:21:48.384 "zcopy": false 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "nvmf_create_subsystem", 00:21:48.384 "params": { 00:21:48.384 "allow_any_host": false, 00:21:48.384 "ana_reporting": false, 00:21:48.384 "max_cntlid": 65519, 00:21:48.384 "max_namespaces": 10, 00:21:48.384 "min_cntlid": 1, 00:21:48.384 "model_number": "SPDK bdev Controller", 00:21:48.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.384 "serial_number": "SPDK00000000000001" 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "nvmf_subsystem_add_host", 00:21:48.384 "params": { 00:21:48.384 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.384 "psk": "key0" 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "nvmf_subsystem_add_ns", 00:21:48.384 "params": { 00:21:48.384 "namespace": { 00:21:48.384 "bdev_name": "malloc0", 00:21:48.384 "nguid": "F97873808DE74A3BAFBC62F4145A8976", 00:21:48.384 "no_auto_visible": false, 00:21:48.384 "nsid": 1, 00:21:48.384 "uuid": "f9787380-8de7-4a3b-afbc-62f4145a8976" 00:21:48.384 }, 00:21:48.384 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:48.384 } 00:21:48.384 }, 00:21:48.384 { 00:21:48.384 "method": "nvmf_subsystem_add_listener", 00:21:48.384 "params": { 00:21:48.384 "listen_address": { 00:21:48.384 "adrfam": "IPv4", 00:21:48.384 "traddr": "10.0.0.3", 00:21:48.384 "trsvcid": "4420", 00:21:48.384 "trtype": "TCP" 00:21:48.384 }, 00:21:48.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.384 "secure_channel": true 00:21:48.384 } 00:21:48.384 } 00:21:48.384 ] 00:21:48.384 } 00:21:48.384 ] 00:21:48.384 }' 00:21:48.385 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=99858 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 99858 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99858 ']' 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.643 06:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.643 [2024-11-27 06:24:07.955577] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:21:48.643 [2024-11-27 06:24:07.955635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.643 [2024-11-27 06:24:08.070564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:48.643 [2024-11-27 06:24:08.096498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.643 [2024-11-27 06:24:08.113257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.643 [2024-11-27 06:24:08.113290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.643 [2024-11-27 06:24:08.113295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.643 [2024-11-27 06:24:08.113299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.643 [2024-11-27 06:24:08.113303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.643 [2024-11-27 06:24:08.113534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.901 [2024-11-27 06:24:08.302650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.901 [2024-11-27 06:24:08.334595] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.901 [2024-11-27 06:24:08.334720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=99901 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 99901 /var/tmp/bdevperf.sock 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 99901 ']' 00:21:49.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
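The two tcp.c notices above come from the freshly started target (pid 99858): because the secure listener is part of the boot-time configuration, it is live as soon as the app is up, with no further RPCs needed. The stanza responsible, extracted from the config dump above, is:

    { "method": "nvmf_subsystem_add_listener",
      "params": {
        "listen_address": { "adrfam": "IPv4", "traddr": "10.0.0.3",
                            "trsvcid": "4420", "trtype": "TCP" },
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "secure_channel": true
      } }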
00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:49.468 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:49.468 "subsystems": [ 00:21:49.468 { 00:21:49.468 "subsystem": "keyring", 00:21:49.468 "config": [ 00:21:49.468 { 00:21:49.468 "method": "keyring_file_add_key", 00:21:49.468 "params": { 00:21:49.468 "name": "key0", 00:21:49.468 "path": "/tmp/tmp.6fiugU9PEv" 00:21:49.468 } 00:21:49.468 } 00:21:49.468 ] 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "subsystem": "iobuf", 00:21:49.468 "config": [ 00:21:49.468 { 00:21:49.468 "method": "iobuf_set_options", 00:21:49.468 "params": { 00:21:49.468 "enable_numa": false, 00:21:49.468 "large_bufsize": 135168, 00:21:49.468 "large_pool_count": 1024, 00:21:49.468 "small_bufsize": 8192, 00:21:49.468 "small_pool_count": 8192 00:21:49.468 } 00:21:49.468 } 00:21:49.468 ] 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "subsystem": "sock", 00:21:49.468 "config": [ 00:21:49.468 { 00:21:49.468 "method": "sock_set_default_impl", 00:21:49.468 "params": { 00:21:49.468 "impl_name": "posix" 00:21:49.468 } 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "method": "sock_impl_set_options", 00:21:49.468 "params": { 00:21:49.468 "enable_ktls": false, 00:21:49.468 "enable_placement_id": 0, 00:21:49.468 "enable_quickack": false, 00:21:49.468 "enable_recv_pipe": true, 00:21:49.468 "enable_zerocopy_send_client": false, 00:21:49.468 "enable_zerocopy_send_server": true, 00:21:49.468 "impl_name": "ssl", 00:21:49.468 "recv_buf_size": 4096, 00:21:49.468 "send_buf_size": 4096, 00:21:49.468 "tls_version": 0, 00:21:49.468 "zerocopy_threshold": 0 00:21:49.468 } 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "method": "sock_impl_set_options", 00:21:49.468 "params": { 00:21:49.468 "enable_ktls": false, 00:21:49.468 "enable_placement_id": 0, 00:21:49.468 "enable_quickack": false, 00:21:49.468 "enable_recv_pipe": true, 00:21:49.468 "enable_zerocopy_send_client": false, 00:21:49.468 "enable_zerocopy_send_server": true, 00:21:49.468 "impl_name": "posix", 00:21:49.468 "recv_buf_size": 2097152, 00:21:49.468 "send_buf_size": 2097152, 00:21:49.468 "tls_version": 0, 00:21:49.468 "zerocopy_threshold": 0 00:21:49.468 } 00:21:49.468 } 00:21:49.468 ] 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "subsystem": "vmd", 00:21:49.468 "config": [] 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "subsystem": "accel", 00:21:49.468 "config": [ 00:21:49.468 { 00:21:49.468 "method": "accel_set_options", 00:21:49.468 "params": { 00:21:49.468 "buf_count": 2048, 00:21:49.468 "large_cache_size": 16, 00:21:49.468 "sequence_count": 2048, 00:21:49.468 "small_cache_size": 128, 00:21:49.468 "task_count": 2048 00:21:49.468 } 00:21:49.468 } 00:21:49.468 ] 00:21:49.468 }, 00:21:49.468 
{ 00:21:49.468 "subsystem": "bdev", 00:21:49.468 "config": [ 00:21:49.468 { 00:21:49.468 "method": "bdev_set_options", 00:21:49.468 "params": { 00:21:49.468 "bdev_auto_examine": true, 00:21:49.468 "bdev_io_cache_size": 256, 00:21:49.468 "bdev_io_pool_size": 65535, 00:21:49.468 "iobuf_large_cache_size": 16, 00:21:49.469 "iobuf_small_cache_size": 128 00:21:49.469 } 00:21:49.469 }, 00:21:49.469 { 00:21:49.469 "method": "bdev_raid_set_options", 00:21:49.469 "params": { 00:21:49.469 "process_max_bandwidth_mb_sec": 0, 00:21:49.469 "process_window_size_kb": 1024 00:21:49.469 } 00:21:49.469 }, 00:21:49.469 { 00:21:49.469 "method": "bdev_iscsi_set_options", 00:21:49.469 "params": { 00:21:49.469 "timeout_sec": 30 00:21:49.469 } 00:21:49.469 }, 00:21:49.469 { 00:21:49.469 "method": "bdev_nvme_set_options", 00:21:49.469 "params": { 00:21:49.469 "action_on_timeout": "none", 00:21:49.469 "allow_accel_sequence": false, 00:21:49.469 "arbitration_burst": 0, 00:21:49.469 "bdev_retry_count": 3, 00:21:49.469 "ctrlr_loss_timeout_sec": 0, 00:21:49.469 "delay_cmd_submit": true, 00:21:49.469 "dhchap_dhgroups": [ 00:21:49.469 "null", 00:21:49.469 "ffdhe2048", 00:21:49.469 "ffdhe3072", 00:21:49.469 "ffdhe4096", 00:21:49.469 "ffdhe6144", 00:21:49.469 "ffdhe8192" 00:21:49.469 ], 00:21:49.469 "dhchap_digests": [ 00:21:49.469 "sha256", 00:21:49.469 "sha384", 00:21:49.469 "sha512" 00:21:49.469 ], 00:21:49.469 "disable_auto_failback": false, 00:21:49.469 "fast_io_fail_timeout_sec": 0, 00:21:49.469 "generate_uuids": false, 00:21:49.469 "high_priority_weight": 0, 00:21:49.469 "io_path_stat": false, 00:21:49.469 "io_queue_requests": 512, 00:21:49.469 "keep_alive_timeout_ms": 10000, 00:21:49.469 "low_priority_weight": 0, 00:21:49.469 "medium_priority_weight": 0, 00:21:49.469 "nvme_adminq_poll_period_us": 10000, 00:21:49.469 "nvme_error_stat": false, 00:21:49.469 "nvme_ioq_poll_period_us": 0, 00:21:49.469 "rdma_cm_event_timeout_ms": 0, 00:21:49.469 "rdma_max_cq_size": 0, 00:21:49.469 "rdma_srq_size": 0, 00:21:49.469 "reconnect_delay_sec": 0, 00:21:49.469 "timeout_admin_us": 0, 00:21:49.469 "timeout_us": 0, 00:21:49.469 "transport_ack_timeout": 0, 00:21:49.469 "transport_retry_count": 4, 00:21:49.469 "transport_tos": 0 00:21:49.469 } 00:21:49.469 }, 00:21:49.469 { 00:21:49.469 "method": "bdev_nvme_attach_controller", 00:21:49.469 "params": { 00:21:49.469 "adrfam": "IPv4", 00:21:49.469 "ctrlr_loss_timeout_sec": 0, 00:21:49.469 "ddgst": false, 00:21:49.469 "fast_io_fail_timeout_sec": 0, 00:21:49.469 "hdgst": false, 00:21:49.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.469 "multipath": "multipath", 00:21:49.469 "name": "TLSTEST", 00:21:49.469 "prchk_guard": false, 00:21:49.469 "prchk_reftag": false, 00:21:49.469 "psk": "key0", 00:21:49.469 "reconnect_delay_sec": 0, 00:21:49.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.469 "traddr": "10.0.0.3", 00:21:49.469 "trsvcid": "4420", 00:21:49.469 "trtype": "TCP" 00:21:49.469 } 00:21:49.469 }, 00:21:49.469 { 00:21:49.469 "method": "bdev_nvme_set_hotplug", 00:21:49.469 "params": { 00:21:49.469 "enable": false, 00:21:49.469 "period_us": 100000 00:21:49.469 } 00:21:49.469 }, 00:21:49.469 { 00:21:49.469 "method": "bdev_wait_for_examine" 00:21:49.469 } 00:21:49.469 ] 00:21:49.469 }, 00:21:49.469 { 00:21:49.469 "subsystem": "nbd", 00:21:49.469 "config": [] 00:21:49.469 } 00:21:49.469 ] 00:21:49.469 }' 00:21:49.469 [2024-11-27 06:24:08.915731] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:21:49.469 [2024-11-27 06:24:08.915795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99901 ] 00:21:49.727 [2024-11-27 06:24:09.029611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:49.727 [2024-11-27 06:24:09.054554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.727 [2024-11-27 06:24:09.074481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.727 [2024-11-27 06:24:09.210981] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.658 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.658 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:50.658 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:50.658 Running I/O for 10 seconds... 00:21:52.524 5680.00 IOPS, 22.19 MiB/s [2024-11-27T06:24:12.989Z] 6338.50 IOPS, 24.76 MiB/s [2024-11-27T06:24:14.361Z] 6582.67 IOPS, 25.71 MiB/s [2024-11-27T06:24:15.296Z] 6719.25 IOPS, 26.25 MiB/s [2024-11-27T06:24:16.241Z] 6801.60 IOPS, 26.57 MiB/s [2024-11-27T06:24:17.191Z] 6851.33 IOPS, 26.76 MiB/s [2024-11-27T06:24:18.124Z] 6876.71 IOPS, 26.86 MiB/s [2024-11-27T06:24:19.056Z] 6909.75 IOPS, 26.99 MiB/s [2024-11-27T06:24:19.991Z] 6933.67 IOPS, 27.08 MiB/s [2024-11-27T06:24:19.991Z] 6953.20 IOPS, 27.16 MiB/s 00:22:00.461 Latency(us) 00:22:00.461 [2024-11-27T06:24:19.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.461 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.461 Verification LBA range: start 0x0 length 0x2000 00:22:00.461 TLSTESTn1 : 10.01 6959.35 27.18 0.00 0.00 18363.24 3352.42 18350.08 00:22:00.461 [2024-11-27T06:24:19.991Z] =================================================================================================================== 00:22:00.461 [2024-11-27T06:24:19.991Z] Total : 6959.35 27.18 0.00 0.00 18363.24 3352.42 18350.08 00:22:00.461 { 00:22:00.461 "results": [ 00:22:00.461 { 00:22:00.461 "job": "TLSTESTn1", 00:22:00.461 "core_mask": "0x4", 00:22:00.461 "workload": "verify", 00:22:00.461 "status": "finished", 00:22:00.461 "verify_range": { 00:22:00.461 "start": 0, 00:22:00.461 "length": 8192 00:22:00.461 }, 00:22:00.461 "queue_depth": 128, 00:22:00.461 "io_size": 4096, 00:22:00.461 "runtime": 10.009271, 00:22:00.461 "iops": 6959.34798847988, 00:22:00.461 "mibps": 27.18495307999953, 00:22:00.461 "io_failed": 0, 00:22:00.461 "io_timeout": 0, 00:22:00.461 "avg_latency_us": 18363.23671635264, 00:22:00.461 "min_latency_us": 3352.4184615384615, 00:22:00.461 "max_latency_us": 18350.08 00:22:00.461 } 00:22:00.461 ], 00:22:00.461 "core_count": 1 00:22:00.461 } 00:22:00.461 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.461 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 99901 00:22:00.461 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99901 ']' 00:22:00.461 
06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99901 00:22:00.461 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:00.720 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.720 06:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99901 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:00.720 killing process with pid 99901 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99901' 00:22:00.720 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.720 00:22:00.720 Latency(us) 00:22:00.720 [2024-11-27T06:24:20.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.720 [2024-11-27T06:24:20.250Z] =================================================================================================================== 00:22:00.720 [2024-11-27T06:24:20.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99901 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99901 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 99858 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 99858 ']' 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 99858 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99858 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.720 killing process with pid 99858 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99858' 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 99858 00:22:00.720 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 99858 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=100056 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 100056 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100056 ']' 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.979 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.979 [2024-11-27 06:24:20.296072] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:00.979 [2024-11-27 06:24:20.296140] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.979 [2024-11-27 06:24:20.410890] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:00.979 [2024-11-27 06:24:20.438100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.979 [2024-11-27 06:24:20.458088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.979 [2024-11-27 06:24:20.458130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.979 [2024-11-27 06:24:20.458137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.979 [2024-11-27 06:24:20.458141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.979 [2024-11-27 06:24:20.458146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.979 [2024-11-27 06:24:20.458398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.6fiugU9PEv 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6fiugU9PEv 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:01.914 [2024-11-27 06:24:21.356986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.914 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:02.173 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:02.431 [2024-11-27 06:24:21.737033] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.431 [2024-11-27 06:24:21.737195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:02.431 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:02.431 malloc0 00:22:02.431 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.689 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:22:02.946 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=100162 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 100162 /var/tmp/bdevperf.sock 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100162 ']' 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
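Condensed from the setup_nvmf_tgt trace above (target/tls.sh@50-59), the target-side sequence is the following, where rpc.py stands for scripts/rpc.py talking to the default /var/tmp/spdk.sock; "-k" on the listener is what requests TLS, and the PSK file is the temp key the test generated earlier:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0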
00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.204 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.204 [2024-11-27 06:24:22.575456] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:03.204 [2024-11-27 06:24:22.575514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100162 ] 00:22:03.204 [2024-11-27 06:24:22.686363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:03.204 [2024-11-27 06:24:22.709464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.204 [2024-11-27 06:24:22.728255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.462 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.462 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:03.462 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:22:03.719 06:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:03.719 [2024-11-27 06:24:23.186881] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.977 nvme0n1 00:22:03.977 06:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:03.977 Running I/O for 1 seconds... 
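On the initiator side, the same PSK file is registered with the bdevperf RPC server before the controller is attached over TLS; condensed from the trace above (target/tls.sh@229-234), with the I/O results following below:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests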
00:22:04.912 7040.00 IOPS, 27.50 MiB/s 00:22:04.912 Latency(us) 00:22:04.912 [2024-11-27T06:24:24.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.912 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:04.912 Verification LBA range: start 0x0 length 0x2000 00:22:04.912 nvme0n1 : 1.01 7100.71 27.74 0.00 0.00 17916.32 3604.48 15930.29 00:22:04.912 [2024-11-27T06:24:24.442Z] =================================================================================================================== 00:22:04.912 [2024-11-27T06:24:24.442Z] Total : 7100.71 27.74 0.00 0.00 17916.32 3604.48 15930.29 00:22:04.912 { 00:22:04.912 "results": [ 00:22:04.912 { 00:22:04.912 "job": "nvme0n1", 00:22:04.912 "core_mask": "0x2", 00:22:04.912 "workload": "verify", 00:22:04.912 "status": "finished", 00:22:04.912 "verify_range": { 00:22:04.912 "start": 0, 00:22:04.912 "length": 8192 00:22:04.912 }, 00:22:04.912 "queue_depth": 128, 00:22:04.912 "io_size": 4096, 00:22:04.912 "runtime": 1.009477, 00:22:04.912 "iops": 7100.706603518455, 00:22:04.912 "mibps": 27.737135169993966, 00:22:04.912 "io_failed": 0, 00:22:04.912 "io_timeout": 0, 00:22:04.912 "avg_latency_us": 17916.316483516486, 00:22:04.912 "min_latency_us": 3604.48, 00:22:04.912 "max_latency_us": 15930.289230769231 00:22:04.912 } 00:22:04.912 ], 00:22:04.912 "core_count": 1 00:22:04.912 } 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 100162 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100162 ']' 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100162 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100162 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:04.912 killing process with pid 100162 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100162' 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100162 00:22:04.912 Received shutdown signal, test time was about 1.000000 seconds 00:22:04.912 00:22:04.912 Latency(us) 00:22:04.912 [2024-11-27T06:24:24.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.912 [2024-11-27T06:24:24.442Z] =================================================================================================================== 00:22:04.912 [2024-11-27T06:24:24.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.912 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100162 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 100056 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100056 ']' 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100056 00:22:05.175 06:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100056 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.175 killing process with pid 100056 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100056' 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100056 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100056 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=100217 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 100217 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100217 ']' 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.175 06:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.175 [2024-11-27 06:24:24.697162] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:05.175 [2024-11-27 06:24:24.697219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.433 [2024-11-27 06:24:24.811802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:05.433 [2024-11-27 06:24:24.838735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.433 [2024-11-27 06:24:24.856872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:05.433 [2024-11-27 06:24:24.856934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.433 [2024-11-27 06:24:24.856941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.433 [2024-11-27 06:24:24.856947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.433 [2024-11-27 06:24:24.856951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.433 [2024-11-27 06:24:24.857195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.368 [2024-11-27 06:24:25.623455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.368 malloc0 00:22:06.368 [2024-11-27 06:24:25.650083] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.368 [2024-11-27 06:24:25.650225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=100263 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 100263 /var/tmp/bdevperf.sock 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100263 ']' 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.368 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.368 [2024-11-27 06:24:25.713570] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:06.368 [2024-11-27 06:24:25.713628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100263 ] 00:22:06.368 [2024-11-27 06:24:25.827361] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:06.368 [2024-11-27 06:24:25.855419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.368 [2024-11-27 06:24:25.874860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.627 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.627 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:06.627 06:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv 00:22:06.627 06:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:06.885 [2024-11-27 06:24:26.341389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.142 nvme0n1 00:22:07.142 06:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.142 Running I/O for 1 seconds... 
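Once the short verification run below finishes, the test snapshots the live configuration of both sides for comparison: save_config is issued against the target (target/tls.sh@267) and against the bdevperf socket (target/tls.sh@268), producing the tgtcfg and bperfcfg JSON dumps that follow. In outline, consistent with the trace:

    tgtcfg=$(rpc_cmd save_config)                              # target side
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)   # bdevperf side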
00:22:08.075 5657.00 IOPS, 22.10 MiB/s 00:22:08.075 Latency(us) 00:22:08.075 [2024-11-27T06:24:27.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.075 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:08.075 Verification LBA range: start 0x0 length 0x2000 00:22:08.075 nvme0n1 : 1.01 5717.26 22.33 0.00 0.00 22225.60 4335.46 17644.31 00:22:08.075 [2024-11-27T06:24:27.605Z] =================================================================================================================== 00:22:08.075 [2024-11-27T06:24:27.605Z] Total : 5717.26 22.33 0.00 0.00 22225.60 4335.46 17644.31 00:22:08.075 { 00:22:08.075 "results": [ 00:22:08.075 { 00:22:08.075 "job": "nvme0n1", 00:22:08.075 "core_mask": "0x2", 00:22:08.075 "workload": "verify", 00:22:08.075 "status": "finished", 00:22:08.075 "verify_range": { 00:22:08.075 "start": 0, 00:22:08.075 "length": 8192 00:22:08.075 }, 00:22:08.075 "queue_depth": 128, 00:22:08.075 "io_size": 4096, 00:22:08.075 "runtime": 1.011849, 00:22:08.075 "iops": 5717.256230919831, 00:22:08.075 "mibps": 22.33303215203059, 00:22:08.075 "io_failed": 0, 00:22:08.075 "io_timeout": 0, 00:22:08.075 "avg_latency_us": 22225.599136227644, 00:22:08.075 "min_latency_us": 4335.458461538462, 00:22:08.075 "max_latency_us": 17644.30769230769 00:22:08.075 } 00:22:08.075 ], 00:22:08.075 "core_count": 1 00:22:08.075 } 00:22:08.075 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:08.075 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.075 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.335 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.335 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:08.335 "subsystems": [ 00:22:08.335 { 00:22:08.335 "subsystem": "keyring", 00:22:08.335 "config": [ 00:22:08.335 { 00:22:08.335 "method": "keyring_file_add_key", 00:22:08.335 "params": { 00:22:08.335 "name": "key0", 00:22:08.335 "path": "/tmp/tmp.6fiugU9PEv" 00:22:08.335 } 00:22:08.335 } 00:22:08.335 ] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "iobuf", 00:22:08.335 "config": [ 00:22:08.335 { 00:22:08.335 "method": "iobuf_set_options", 00:22:08.335 "params": { 00:22:08.335 "enable_numa": false, 00:22:08.335 "large_bufsize": 135168, 00:22:08.335 "large_pool_count": 1024, 00:22:08.335 "small_bufsize": 8192, 00:22:08.335 "small_pool_count": 8192 00:22:08.335 } 00:22:08.335 } 00:22:08.335 ] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "sock", 00:22:08.335 "config": [ 00:22:08.335 { 00:22:08.335 "method": "sock_set_default_impl", 00:22:08.335 "params": { 00:22:08.335 "impl_name": "posix" 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "sock_impl_set_options", 00:22:08.335 "params": { 00:22:08.335 "enable_ktls": false, 00:22:08.335 "enable_placement_id": 0, 00:22:08.335 "enable_quickack": false, 00:22:08.335 "enable_recv_pipe": true, 00:22:08.335 "enable_zerocopy_send_client": false, 00:22:08.335 "enable_zerocopy_send_server": true, 00:22:08.335 "impl_name": "ssl", 00:22:08.335 "recv_buf_size": 4096, 00:22:08.335 "send_buf_size": 4096, 00:22:08.335 "tls_version": 0, 00:22:08.335 "zerocopy_threshold": 0 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "sock_impl_set_options", 00:22:08.335 "params": { 00:22:08.335 "enable_ktls": false, 
00:22:08.335 "enable_placement_id": 0, 00:22:08.335 "enable_quickack": false, 00:22:08.335 "enable_recv_pipe": true, 00:22:08.335 "enable_zerocopy_send_client": false, 00:22:08.335 "enable_zerocopy_send_server": true, 00:22:08.335 "impl_name": "posix", 00:22:08.335 "recv_buf_size": 2097152, 00:22:08.335 "send_buf_size": 2097152, 00:22:08.335 "tls_version": 0, 00:22:08.335 "zerocopy_threshold": 0 00:22:08.335 } 00:22:08.335 } 00:22:08.335 ] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "vmd", 00:22:08.335 "config": [] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "accel", 00:22:08.335 "config": [ 00:22:08.335 { 00:22:08.335 "method": "accel_set_options", 00:22:08.335 "params": { 00:22:08.335 "buf_count": 2048, 00:22:08.335 "large_cache_size": 16, 00:22:08.335 "sequence_count": 2048, 00:22:08.335 "small_cache_size": 128, 00:22:08.335 "task_count": 2048 00:22:08.335 } 00:22:08.335 } 00:22:08.335 ] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "bdev", 00:22:08.335 "config": [ 00:22:08.335 { 00:22:08.335 "method": "bdev_set_options", 00:22:08.335 "params": { 00:22:08.335 "bdev_auto_examine": true, 00:22:08.335 "bdev_io_cache_size": 256, 00:22:08.335 "bdev_io_pool_size": 65535, 00:22:08.335 "iobuf_large_cache_size": 16, 00:22:08.335 "iobuf_small_cache_size": 128 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "bdev_raid_set_options", 00:22:08.335 "params": { 00:22:08.335 "process_max_bandwidth_mb_sec": 0, 00:22:08.335 "process_window_size_kb": 1024 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "bdev_iscsi_set_options", 00:22:08.335 "params": { 00:22:08.335 "timeout_sec": 30 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "bdev_nvme_set_options", 00:22:08.335 "params": { 00:22:08.335 "action_on_timeout": "none", 00:22:08.335 "allow_accel_sequence": false, 00:22:08.335 "arbitration_burst": 0, 00:22:08.335 "bdev_retry_count": 3, 00:22:08.335 "ctrlr_loss_timeout_sec": 0, 00:22:08.335 "delay_cmd_submit": true, 00:22:08.335 "dhchap_dhgroups": [ 00:22:08.335 "null", 00:22:08.335 "ffdhe2048", 00:22:08.335 "ffdhe3072", 00:22:08.335 "ffdhe4096", 00:22:08.335 "ffdhe6144", 00:22:08.335 "ffdhe8192" 00:22:08.335 ], 00:22:08.335 "dhchap_digests": [ 00:22:08.335 "sha256", 00:22:08.335 "sha384", 00:22:08.335 "sha512" 00:22:08.335 ], 00:22:08.335 "disable_auto_failback": false, 00:22:08.335 "fast_io_fail_timeout_sec": 0, 00:22:08.335 "generate_uuids": false, 00:22:08.335 "high_priority_weight": 0, 00:22:08.335 "io_path_stat": false, 00:22:08.335 "io_queue_requests": 0, 00:22:08.335 "keep_alive_timeout_ms": 10000, 00:22:08.335 "low_priority_weight": 0, 00:22:08.335 "medium_priority_weight": 0, 00:22:08.335 "nvme_adminq_poll_period_us": 10000, 00:22:08.335 "nvme_error_stat": false, 00:22:08.335 "nvme_ioq_poll_period_us": 0, 00:22:08.335 "rdma_cm_event_timeout_ms": 0, 00:22:08.335 "rdma_max_cq_size": 0, 00:22:08.335 "rdma_srq_size": 0, 00:22:08.335 "reconnect_delay_sec": 0, 00:22:08.335 "timeout_admin_us": 0, 00:22:08.335 "timeout_us": 0, 00:22:08.335 "transport_ack_timeout": 0, 00:22:08.335 "transport_retry_count": 4, 00:22:08.335 "transport_tos": 0 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "bdev_nvme_set_hotplug", 00:22:08.335 "params": { 00:22:08.335 "enable": false, 00:22:08.335 "period_us": 100000 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "bdev_malloc_create", 00:22:08.335 "params": { 00:22:08.335 "block_size": 4096, 00:22:08.335 "dif_is_head_of_md": false, 
00:22:08.335 "dif_pi_format": 0, 00:22:08.335 "dif_type": 0, 00:22:08.335 "md_size": 0, 00:22:08.335 "name": "malloc0", 00:22:08.335 "num_blocks": 8192, 00:22:08.335 "optimal_io_boundary": 0, 00:22:08.335 "physical_block_size": 4096, 00:22:08.335 "uuid": "3db28a1c-5ca0-46a7-b322-b5b804870d73" 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "bdev_wait_for_examine" 00:22:08.335 } 00:22:08.335 ] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "nbd", 00:22:08.335 "config": [] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "scheduler", 00:22:08.335 "config": [ 00:22:08.335 { 00:22:08.335 "method": "framework_set_scheduler", 00:22:08.335 "params": { 00:22:08.335 "name": "static" 00:22:08.335 } 00:22:08.335 } 00:22:08.335 ] 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "subsystem": "nvmf", 00:22:08.335 "config": [ 00:22:08.335 { 00:22:08.335 "method": "nvmf_set_config", 00:22:08.335 "params": { 00:22:08.335 "admin_cmd_passthru": { 00:22:08.335 "identify_ctrlr": false 00:22:08.335 }, 00:22:08.335 "dhchap_dhgroups": [ 00:22:08.335 "null", 00:22:08.335 "ffdhe2048", 00:22:08.335 "ffdhe3072", 00:22:08.335 "ffdhe4096", 00:22:08.335 "ffdhe6144", 00:22:08.335 "ffdhe8192" 00:22:08.335 ], 00:22:08.335 "dhchap_digests": [ 00:22:08.335 "sha256", 00:22:08.335 "sha384", 00:22:08.335 "sha512" 00:22:08.335 ], 00:22:08.335 "discovery_filter": "match_any" 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "nvmf_set_max_subsystems", 00:22:08.335 "params": { 00:22:08.335 "max_subsystems": 1024 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "nvmf_set_crdt", 00:22:08.335 "params": { 00:22:08.335 "crdt1": 0, 00:22:08.335 "crdt2": 0, 00:22:08.335 "crdt3": 0 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.335 "method": "nvmf_create_transport", 00:22:08.335 "params": { 00:22:08.335 "abort_timeout_sec": 1, 00:22:08.335 "ack_timeout": 0, 00:22:08.335 "buf_cache_size": 4294967295, 00:22:08.335 "c2h_success": false, 00:22:08.335 "data_wr_pool_size": 0, 00:22:08.335 "dif_insert_or_strip": false, 00:22:08.335 "in_capsule_data_size": 4096, 00:22:08.335 "io_unit_size": 131072, 00:22:08.335 "max_aq_depth": 128, 00:22:08.335 "max_io_qpairs_per_ctrlr": 127, 00:22:08.335 "max_io_size": 131072, 00:22:08.335 "max_queue_depth": 128, 00:22:08.335 "num_shared_buffers": 511, 00:22:08.335 "sock_priority": 0, 00:22:08.335 "trtype": "TCP", 00:22:08.335 "zcopy": false 00:22:08.335 } 00:22:08.335 }, 00:22:08.335 { 00:22:08.336 "method": "nvmf_create_subsystem", 00:22:08.336 "params": { 00:22:08.336 "allow_any_host": false, 00:22:08.336 "ana_reporting": false, 00:22:08.336 "max_cntlid": 65519, 00:22:08.336 "max_namespaces": 32, 00:22:08.336 "min_cntlid": 1, 00:22:08.336 "model_number": "SPDK bdev Controller", 00:22:08.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.336 "serial_number": "00000000000000000000" 00:22:08.336 } 00:22:08.336 }, 00:22:08.336 { 00:22:08.336 "method": "nvmf_subsystem_add_host", 00:22:08.336 "params": { 00:22:08.336 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.336 "psk": "key0" 00:22:08.336 } 00:22:08.336 }, 00:22:08.336 { 00:22:08.336 "method": "nvmf_subsystem_add_ns", 00:22:08.336 "params": { 00:22:08.336 "namespace": { 00:22:08.336 "bdev_name": "malloc0", 00:22:08.336 "nguid": "3DB28A1C5CA046A7B322B5B804870D73", 00:22:08.336 "no_auto_visible": false, 00:22:08.336 "nsid": 1, 00:22:08.336 "uuid": "3db28a1c-5ca0-46a7-b322-b5b804870d73" 00:22:08.336 }, 00:22:08.336 "nqn": 
"nqn.2016-06.io.spdk:cnode1" 00:22:08.336 } 00:22:08.336 }, 00:22:08.336 { 00:22:08.336 "method": "nvmf_subsystem_add_listener", 00:22:08.336 "params": { 00:22:08.336 "listen_address": { 00:22:08.336 "adrfam": "IPv4", 00:22:08.336 "traddr": "10.0.0.3", 00:22:08.336 "trsvcid": "4420", 00:22:08.336 "trtype": "TCP" 00:22:08.336 }, 00:22:08.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.336 "secure_channel": false, 00:22:08.336 "sock_impl": "ssl" 00:22:08.336 } 00:22:08.336 } 00:22:08.336 ] 00:22:08.336 } 00:22:08.336 ] 00:22:08.336 }' 00:22:08.336 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:08.594 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:08.594 "subsystems": [ 00:22:08.594 { 00:22:08.594 "subsystem": "keyring", 00:22:08.594 "config": [ 00:22:08.594 { 00:22:08.594 "method": "keyring_file_add_key", 00:22:08.594 "params": { 00:22:08.594 "name": "key0", 00:22:08.594 "path": "/tmp/tmp.6fiugU9PEv" 00:22:08.594 } 00:22:08.594 } 00:22:08.594 ] 00:22:08.594 }, 00:22:08.594 { 00:22:08.594 "subsystem": "iobuf", 00:22:08.594 "config": [ 00:22:08.594 { 00:22:08.594 "method": "iobuf_set_options", 00:22:08.594 "params": { 00:22:08.594 "enable_numa": false, 00:22:08.594 "large_bufsize": 135168, 00:22:08.594 "large_pool_count": 1024, 00:22:08.594 "small_bufsize": 8192, 00:22:08.594 "small_pool_count": 8192 00:22:08.594 } 00:22:08.594 } 00:22:08.594 ] 00:22:08.594 }, 00:22:08.594 { 00:22:08.594 "subsystem": "sock", 00:22:08.594 "config": [ 00:22:08.594 { 00:22:08.594 "method": "sock_set_default_impl", 00:22:08.594 "params": { 00:22:08.594 "impl_name": "posix" 00:22:08.594 } 00:22:08.594 }, 00:22:08.594 { 00:22:08.594 "method": "sock_impl_set_options", 00:22:08.594 "params": { 00:22:08.594 "enable_ktls": false, 00:22:08.594 "enable_placement_id": 0, 00:22:08.594 "enable_quickack": false, 00:22:08.594 "enable_recv_pipe": true, 00:22:08.594 "enable_zerocopy_send_client": false, 00:22:08.594 "enable_zerocopy_send_server": true, 00:22:08.594 "impl_name": "ssl", 00:22:08.594 "recv_buf_size": 4096, 00:22:08.594 "send_buf_size": 4096, 00:22:08.594 "tls_version": 0, 00:22:08.594 "zerocopy_threshold": 0 00:22:08.594 } 00:22:08.594 }, 00:22:08.594 { 00:22:08.594 "method": "sock_impl_set_options", 00:22:08.594 "params": { 00:22:08.594 "enable_ktls": false, 00:22:08.594 "enable_placement_id": 0, 00:22:08.594 "enable_quickack": false, 00:22:08.594 "enable_recv_pipe": true, 00:22:08.594 "enable_zerocopy_send_client": false, 00:22:08.594 "enable_zerocopy_send_server": true, 00:22:08.594 "impl_name": "posix", 00:22:08.594 "recv_buf_size": 2097152, 00:22:08.594 "send_buf_size": 2097152, 00:22:08.594 "tls_version": 0, 00:22:08.594 "zerocopy_threshold": 0 00:22:08.594 } 00:22:08.594 } 00:22:08.594 ] 00:22:08.594 }, 00:22:08.594 { 00:22:08.594 "subsystem": "vmd", 00:22:08.594 "config": [] 00:22:08.594 }, 00:22:08.594 { 00:22:08.594 "subsystem": "accel", 00:22:08.594 "config": [ 00:22:08.594 { 00:22:08.594 "method": "accel_set_options", 00:22:08.594 "params": { 00:22:08.594 "buf_count": 2048, 00:22:08.594 "large_cache_size": 16, 00:22:08.595 "sequence_count": 2048, 00:22:08.595 "small_cache_size": 128, 00:22:08.595 "task_count": 2048 00:22:08.595 } 00:22:08.595 } 00:22:08.595 ] 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "subsystem": "bdev", 00:22:08.595 "config": [ 00:22:08.595 { 00:22:08.595 "method": "bdev_set_options", 00:22:08.595 "params": { 00:22:08.595 
"bdev_auto_examine": true, 00:22:08.595 "bdev_io_cache_size": 256, 00:22:08.595 "bdev_io_pool_size": 65535, 00:22:08.595 "iobuf_large_cache_size": 16, 00:22:08.595 "iobuf_small_cache_size": 128 00:22:08.595 } 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "method": "bdev_raid_set_options", 00:22:08.595 "params": { 00:22:08.595 "process_max_bandwidth_mb_sec": 0, 00:22:08.595 "process_window_size_kb": 1024 00:22:08.595 } 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "method": "bdev_iscsi_set_options", 00:22:08.595 "params": { 00:22:08.595 "timeout_sec": 30 00:22:08.595 } 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "method": "bdev_nvme_set_options", 00:22:08.595 "params": { 00:22:08.595 "action_on_timeout": "none", 00:22:08.595 "allow_accel_sequence": false, 00:22:08.595 "arbitration_burst": 0, 00:22:08.595 "bdev_retry_count": 3, 00:22:08.595 "ctrlr_loss_timeout_sec": 0, 00:22:08.595 "delay_cmd_submit": true, 00:22:08.595 "dhchap_dhgroups": [ 00:22:08.595 "null", 00:22:08.595 "ffdhe2048", 00:22:08.595 "ffdhe3072", 00:22:08.595 "ffdhe4096", 00:22:08.595 "ffdhe6144", 00:22:08.595 "ffdhe8192" 00:22:08.595 ], 00:22:08.595 "dhchap_digests": [ 00:22:08.595 "sha256", 00:22:08.595 "sha384", 00:22:08.595 "sha512" 00:22:08.595 ], 00:22:08.595 "disable_auto_failback": false, 00:22:08.595 "fast_io_fail_timeout_sec": 0, 00:22:08.595 "generate_uuids": false, 00:22:08.595 "high_priority_weight": 0, 00:22:08.595 "io_path_stat": false, 00:22:08.595 "io_queue_requests": 512, 00:22:08.595 "keep_alive_timeout_ms": 10000, 00:22:08.595 "low_priority_weight": 0, 00:22:08.595 "medium_priority_weight": 0, 00:22:08.595 "nvme_adminq_poll_period_us": 10000, 00:22:08.595 "nvme_error_stat": false, 00:22:08.595 "nvme_ioq_poll_period_us": 0, 00:22:08.595 "rdma_cm_event_timeout_ms": 0, 00:22:08.595 "rdma_max_cq_size": 0, 00:22:08.595 "rdma_srq_size": 0, 00:22:08.595 "reconnect_delay_sec": 0, 00:22:08.595 "timeout_admin_us": 0, 00:22:08.595 "timeout_us": 0, 00:22:08.595 "transport_ack_timeout": 0, 00:22:08.595 "transport_retry_count": 4, 00:22:08.595 "transport_tos": 0 00:22:08.595 } 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "method": "bdev_nvme_attach_controller", 00:22:08.595 "params": { 00:22:08.595 "adrfam": "IPv4", 00:22:08.595 "ctrlr_loss_timeout_sec": 0, 00:22:08.595 "ddgst": false, 00:22:08.595 "fast_io_fail_timeout_sec": 0, 00:22:08.595 "hdgst": false, 00:22:08.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.595 "multipath": "multipath", 00:22:08.595 "name": "nvme0", 00:22:08.595 "prchk_guard": false, 00:22:08.595 "prchk_reftag": false, 00:22:08.595 "psk": "key0", 00:22:08.595 "reconnect_delay_sec": 0, 00:22:08.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.595 "traddr": "10.0.0.3", 00:22:08.595 "trsvcid": "4420", 00:22:08.595 "trtype": "TCP" 00:22:08.595 } 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "method": "bdev_nvme_set_hotplug", 00:22:08.595 "params": { 00:22:08.595 "enable": false, 00:22:08.595 "period_us": 100000 00:22:08.595 } 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "method": "bdev_enable_histogram", 00:22:08.595 "params": { 00:22:08.595 "enable": true, 00:22:08.595 "name": "nvme0n1" 00:22:08.595 } 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "method": "bdev_wait_for_examine" 00:22:08.595 } 00:22:08.595 ] 00:22:08.595 }, 00:22:08.595 { 00:22:08.595 "subsystem": "nbd", 00:22:08.595 "config": [] 00:22:08.595 } 00:22:08.595 ] 00:22:08.595 }' 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 100263 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # '[' -z 100263 ']' 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100263 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100263 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:08.595 killing process with pid 100263 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100263' 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100263 00:22:08.595 Received shutdown signal, test time was about 1.000000 seconds 00:22:08.595 00:22:08.595 Latency(us) 00:22:08.595 [2024-11-27T06:24:28.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.595 [2024-11-27T06:24:28.125Z] =================================================================================================================== 00:22:08.595 [2024-11-27T06:24:28.125Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.595 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100263 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 100217 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100217 ']' 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100217 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100217 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.595 killing process with pid 100217 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100217' 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100217 00:22:08.595 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100217 00:22:08.854 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:08.854 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.854 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.854 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:08.854 "subsystems": [ 00:22:08.854 { 00:22:08.854 "subsystem": "keyring", 00:22:08.854 "config": [ 00:22:08.854 { 00:22:08.854 "method": "keyring_file_add_key", 00:22:08.854 "params": { 
00:22:08.854 "name": "key0", 00:22:08.854 "path": "/tmp/tmp.6fiugU9PEv" 00:22:08.854 } 00:22:08.854 } 00:22:08.854 ] 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "subsystem": "iobuf", 00:22:08.854 "config": [ 00:22:08.854 { 00:22:08.854 "method": "iobuf_set_options", 00:22:08.854 "params": { 00:22:08.854 "enable_numa": false, 00:22:08.854 "large_bufsize": 135168, 00:22:08.854 "large_pool_count": 1024, 00:22:08.854 "small_bufsize": 8192, 00:22:08.854 "small_pool_count": 8192 00:22:08.854 } 00:22:08.854 } 00:22:08.854 ] 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "subsystem": "sock", 00:22:08.854 "config": [ 00:22:08.854 { 00:22:08.854 "method": "sock_set_default_impl", 00:22:08.854 "params": { 00:22:08.854 "impl_name": "posix" 00:22:08.854 } 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "method": "sock_impl_set_options", 00:22:08.854 "params": { 00:22:08.854 "enable_ktls": false, 00:22:08.854 "enable_placement_id": 0, 00:22:08.854 "enable_quickack": false, 00:22:08.854 "enable_recv_pipe": true, 00:22:08.854 "enable_zerocopy_send_client": false, 00:22:08.854 "enable_zerocopy_send_server": true, 00:22:08.854 "impl_name": "ssl", 00:22:08.854 "recv_buf_size": 4096, 00:22:08.854 "send_buf_size": 4096, 00:22:08.854 "tls_version": 0, 00:22:08.854 "zerocopy_threshold": 0 00:22:08.854 } 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "method": "sock_impl_set_options", 00:22:08.854 "params": { 00:22:08.854 "enable_ktls": false, 00:22:08.854 "enable_placement_id": 0, 00:22:08.854 "enable_quickack": false, 00:22:08.854 "enable_recv_pipe": true, 00:22:08.854 "enable_zerocopy_send_client": false, 00:22:08.854 "enable_zerocopy_send_server": true, 00:22:08.854 "impl_name": "posix", 00:22:08.854 "recv_buf_size": 2097152, 00:22:08.854 "send_buf_size": 2097152, 00:22:08.854 "tls_version": 0, 00:22:08.854 "zerocopy_threshold": 0 00:22:08.854 } 00:22:08.854 } 00:22:08.854 ] 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "subsystem": "vmd", 00:22:08.854 "config": [] 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "subsystem": "accel", 00:22:08.854 "config": [ 00:22:08.854 { 00:22:08.854 "method": "accel_set_options", 00:22:08.854 "params": { 00:22:08.854 "buf_count": 2048, 00:22:08.854 "large_cache_size": 16, 00:22:08.854 "sequence_count": 2048, 00:22:08.854 "small_cache_size": 128, 00:22:08.854 "task_count": 2048 00:22:08.854 } 00:22:08.854 } 00:22:08.854 ] 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "subsystem": "bdev", 00:22:08.854 "config": [ 00:22:08.854 { 00:22:08.854 "method": "bdev_set_options", 00:22:08.854 "params": { 00:22:08.854 "bdev_auto_examine": true, 00:22:08.854 "bdev_io_cache_size": 256, 00:22:08.854 "bdev_io_pool_size": 65535, 00:22:08.854 "iobuf_large_cache_size": 16, 00:22:08.854 "iobuf_small_cache_size": 128 00:22:08.854 } 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "method": "bdev_raid_set_options", 00:22:08.854 "params": { 00:22:08.854 "process_max_bandwidth_mb_sec": 0, 00:22:08.854 "process_window_size_kb": 1024 00:22:08.854 } 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "method": "bdev_iscsi_set_options", 00:22:08.854 "params": { 00:22:08.854 "timeout_sec": 30 00:22:08.854 } 00:22:08.854 }, 00:22:08.854 { 00:22:08.854 "method": "bdev_nvme_set_options", 00:22:08.854 "params": { 00:22:08.854 "action_on_timeout": "none", 00:22:08.854 "allow_accel_sequence": false, 00:22:08.854 "arbitration_burst": 0, 00:22:08.854 "bdev_retry_count": 3, 00:22:08.854 "ctrlr_loss_timeout_sec": 0, 00:22:08.854 "delay_cmd_submit": true, 00:22:08.854 "dhchap_dhgroups": [ 00:22:08.854 "null", 00:22:08.854 "ffdhe2048", 
00:22:08.854 "ffdhe3072", 00:22:08.854 "ffdhe4096", 00:22:08.854 "ffdhe6144", 00:22:08.854 "ffdhe8192" 00:22:08.854 ], 00:22:08.854 "dhchap_digests": [ 00:22:08.854 "sha256", 00:22:08.854 "sha384", 00:22:08.854 "sha512" 00:22:08.854 ], 00:22:08.854 "disable_auto_failback": false, 00:22:08.854 "fast_io_fail_timeout_sec": 0, 00:22:08.854 "generate_uuids": false, 00:22:08.854 "high_priority_weight": 0, 00:22:08.854 "io_path_stat": false, 00:22:08.854 "io_queue_requests": 0, 00:22:08.854 "keep_alive_timeout_ms": 10000, 00:22:08.854 "low_priority_weight": 0, 00:22:08.854 "medium_priority_weight": 0, 00:22:08.854 "nvme_adminq_poll_period_us": 10000, 00:22:08.854 "nvme_error_stat": false, 00:22:08.854 "nvme_ioq_poll_period_us": 0, 00:22:08.854 "rdma_cm_event_timeout_ms": 0, 00:22:08.855 "rdma_max_cq_size": 0, 00:22:08.855 "rdma_srq_size": 0, 00:22:08.855 "reconnect_delay_sec": 0, 00:22:08.855 "timeout_admin_us": 0, 00:22:08.855 "timeout_us": 0, 00:22:08.855 "transport_ack_timeout": 0, 00:22:08.855 "transport_retry_count": 4, 00:22:08.855 "transport_tos": 0 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "bdev_nvme_set_hotplug", 00:22:08.855 "params": { 00:22:08.855 "enable": false, 00:22:08.855 "period_us": 100000 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "bdev_malloc_create", 00:22:08.855 "params": { 00:22:08.855 "block_size": 4096, 00:22:08.855 "dif_is_head_of_md": false, 00:22:08.855 "dif_pi_format": 0, 00:22:08.855 "dif_type": 0, 00:22:08.855 "md_size": 0, 00:22:08.855 "name": "malloc0", 00:22:08.855 "num_blocks": 8192, 00:22:08.855 "optimal_io_boundary": 0, 00:22:08.855 "physical_block_size": 4096, 00:22:08.855 "uuid": "3db28a1c-5ca0-46a7-b322-b5b804870d73" 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "bdev_wait_for_examine" 00:22:08.855 } 00:22:08.855 ] 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "subsystem": "nbd", 00:22:08.855 "config": [] 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "subsystem": "scheduler", 00:22:08.855 "config": [ 00:22:08.855 { 00:22:08.855 "method": "framework_set_scheduler", 00:22:08.855 "params": { 00:22:08.855 "name": "static" 00:22:08.855 } 00:22:08.855 } 00:22:08.855 ] 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "subsystem": "nvmf", 00:22:08.855 "config": [ 00:22:08.855 { 00:22:08.855 "method": "nvmf_set_config", 00:22:08.855 "params": { 00:22:08.855 "admin_cmd_passthru": { 00:22:08.855 "identify_ctrlr": false 00:22:08.855 }, 00:22:08.855 "dhchap_dhgroups": [ 00:22:08.855 "null", 00:22:08.855 "ffdhe2048", 00:22:08.855 "ffdhe3072", 00:22:08.855 "ffdhe4096", 00:22:08.855 "ffdhe6144", 00:22:08.855 "ffdhe8192" 00:22:08.855 ], 00:22:08.855 "dhchap_digests": [ 00:22:08.855 "sha256", 00:22:08.855 "sha384", 00:22:08.855 "sha512" 00:22:08.855 ], 00:22:08.855 "discovery_filter": "match_any" 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "nvmf_set_max_subsystems", 00:22:08.855 "params": { 00:22:08.855 "max_subsystems": 1024 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "nvmf_set_crdt", 00:22:08.855 "params": { 00:22:08.855 "crdt1": 0, 00:22:08.855 "crdt2": 0, 00:22:08.855 "crdt3": 0 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "nvmf_create_transport", 00:22:08.855 "params": { 00:22:08.855 "abort_timeout_sec": 1, 00:22:08.855 "ack_timeout": 0, 00:22:08.855 "buf_cache_size": 4294967295, 00:22:08.855 "c2h_success": false, 00:22:08.855 "data_wr_pool_size": 0, 00:22:08.855 "dif_insert_or_strip": false, 00:22:08.855 "in_capsule_data_size": 
4096, 00:22:08.855 "io_unit_size": 131072, 00:22:08.855 "max_aq_depth": 128, 00:22:08.855 "max_io_qpairs_per_ctrlr": 127, 00:22:08.855 "max_io_size": 131072, 00:22:08.855 "max_queue_depth": 128, 00:22:08.855 "num_shared_buffers": 511, 00:22:08.855 "sock_priority": 0, 00:22:08.855 "trtype": "TCP", 00:22:08.855 "zcopy": false 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "nvmf_create_subsystem", 00:22:08.855 "params": { 00:22:08.855 "allow_any_host": false, 00:22:08.855 "ana_reporting": false, 00:22:08.855 "max_cntlid": 65519, 00:22:08.855 "max_namespaces": 32, 00:22:08.855 "min_cntlid": 1, 00:22:08.855 "model_number": "SPDK bdev Controller", 00:22:08.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.855 "serial_number": "00000000000000000000" 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "nvmf_subsystem_add_host", 00:22:08.855 "params": { 00:22:08.855 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.855 "psk": "key0" 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "nvmf_subsystem_add_ns", 00:22:08.855 "params": { 00:22:08.855 "namespace": { 00:22:08.855 "bdev_name": "malloc0", 00:22:08.855 "nguid": "3DB28A1C5CA046A7B322B5B804870D73", 00:22:08.855 "no_auto_visible": false, 00:22:08.855 "nsid": 1, 00:22:08.855 "uuid": "3db28a1c-5ca0-46a7-b322-b5b804870d73" 00:22:08.855 }, 00:22:08.855 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:08.855 } 00:22:08.855 }, 00:22:08.855 { 00:22:08.855 "method": "nvmf_subsystem_add_listener", 00:22:08.855 "params": { 00:22:08.855 "listen_address": { 00:22:08.855 "adrfam": "IPv4", 00:22:08.855 "traddr": "10.0.0.3", 00:22:08.855 "trsvcid": "4420", 00:22:08.855 "trtype": "TCP" 00:22:08.855 }, 00:22:08.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.855 "secure_channel": false, 00:22:08.855 "sock_impl": "ssl" 00:22:08.855 } 00:22:08.855 } 00:22:08.855 ] 00:22:08.855 } 00:22:08.855 ] 00:22:08.855 }' 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=100334 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 100334 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100334 ']' 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.855 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.855 [2024-11-27 06:24:28.244862] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
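The '-c /dev/fd/62' in the nvmf_tgt command line above comes from bash process substitution: the harness echoes the JSON configuration and the target reads it through the inherited file descriptor as if it were an ordinary config file. A minimal sketch of the same pattern, with the binary path and flags taken from the log (the test additionally wraps the command in 'ip netns exec nvmf_tgt_ns_spdk', omitted here) and the config variable standing in for the full blob above:

    # Sketch: start nvmf_tgt from an in-memory JSON config; <(...) expands to
    # /dev/fd/NN, so the configuration never touches disk.
    cfg='{"subsystems": []}'   # placeholder for the configuration printed above
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$cfg")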
00:22:08.855 [2024-11-27 06:24:28.244934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.855 [2024-11-27 06:24:28.359419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:09.113 [2024-11-27 06:24:28.386915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.113 [2024-11-27 06:24:28.405200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.113 [2024-11-27 06:24:28.405239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.113 [2024-11-27 06:24:28.405245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.113 [2024-11-27 06:24:28.405250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.113 [2024-11-27 06:24:28.405255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.113 [2024-11-27 06:24:28.405536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.113 [2024-11-27 06:24:28.598895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.114 [2024-11-27 06:24:28.630834] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.114 [2024-11-27 06:24:28.630990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=100377 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 100377 /var/tmp/bdevperf.sock 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 100377 ']' 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
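bdevperf is launched here with -z, which makes it initialize and then idle until an external trigger arrives over its RPC socket; the verify workload (-q 128 -o 4k -w verify -t 1) only starts when perform_tests is sent. A condensed sketch of that flow, with the command line reproduced from the log and the readiness wait simplified (the real harness uses waitforlisten rather than a sleep):

    # Sketch: run bdevperf idle (-z) on a private RPC socket, then trigger the
    # workload from outside; the config again arrives via process substitution.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    sleep 1   # stand-in for: waitforlisten $! /var/tmp/bdevperf.sock
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests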
00:22:09.679 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:09.679 "subsystems": [ 00:22:09.679 { 00:22:09.679 "subsystem": "keyring", 00:22:09.679 "config": [ 00:22:09.679 { 00:22:09.679 "method": "keyring_file_add_key", 00:22:09.679 "params": { 00:22:09.679 "name": "key0", 00:22:09.679 "path": "/tmp/tmp.6fiugU9PEv" 00:22:09.679 } 00:22:09.679 } 00:22:09.679 ] 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "subsystem": "iobuf", 00:22:09.679 "config": [ 00:22:09.679 { 00:22:09.679 "method": "iobuf_set_options", 00:22:09.679 "params": { 00:22:09.679 "enable_numa": false, 00:22:09.679 "large_bufsize": 135168, 00:22:09.679 "large_pool_count": 1024, 00:22:09.679 "small_bufsize": 8192, 00:22:09.679 "small_pool_count": 8192 00:22:09.679 } 00:22:09.679 } 00:22:09.679 ] 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "subsystem": "sock", 00:22:09.679 "config": [ 00:22:09.679 { 00:22:09.679 "method": "sock_set_default_impl", 00:22:09.679 "params": { 00:22:09.679 "impl_name": "posix" 00:22:09.679 } 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "method": "sock_impl_set_options", 00:22:09.679 "params": { 00:22:09.679 "enable_ktls": false, 00:22:09.679 "enable_placement_id": 0, 00:22:09.679 "enable_quickack": false, 00:22:09.679 "enable_recv_pipe": true, 00:22:09.679 "enable_zerocopy_send_client": false, 00:22:09.679 "enable_zerocopy_send_server": true, 00:22:09.679 "impl_name": "ssl", 00:22:09.679 "recv_buf_size": 4096, 00:22:09.679 "send_buf_size": 4096, 00:22:09.679 "tls_version": 0, 00:22:09.679 "zerocopy_threshold": 0 00:22:09.679 } 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "method": "sock_impl_set_options", 00:22:09.679 "params": { 00:22:09.679 "enable_ktls": false, 00:22:09.679 "enable_placement_id": 0, 00:22:09.679 "enable_quickack": false, 00:22:09.679 "enable_recv_pipe": true, 00:22:09.679 "enable_zerocopy_send_client": false, 00:22:09.679 "enable_zerocopy_send_server": true, 00:22:09.679 "impl_name": "posix", 00:22:09.679 "recv_buf_size": 2097152, 00:22:09.679 "send_buf_size": 2097152, 00:22:09.679 "tls_version": 0, 00:22:09.679 "zerocopy_threshold": 0 00:22:09.679 } 00:22:09.679 } 00:22:09.679 ] 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "subsystem": "vmd", 00:22:09.679 "config": [] 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "subsystem": "accel", 00:22:09.679 "config": [ 00:22:09.679 { 00:22:09.679 "method": "accel_set_options", 00:22:09.679 "params": { 00:22:09.679 "buf_count": 2048, 00:22:09.679 "large_cache_size": 16, 00:22:09.679 "sequence_count": 2048, 00:22:09.679 "small_cache_size": 128, 00:22:09.679 "task_count": 2048 00:22:09.679 } 00:22:09.679 } 00:22:09.679 ] 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "subsystem": "bdev", 00:22:09.679 "config": [ 00:22:09.679 { 00:22:09.679 "method": "bdev_set_options", 00:22:09.679 "params": { 00:22:09.679 "bdev_auto_examine": true, 00:22:09.679 "bdev_io_cache_size": 256, 00:22:09.679 "bdev_io_pool_size": 65535, 00:22:09.679 "iobuf_large_cache_size": 16, 00:22:09.679 "iobuf_small_cache_size": 128 00:22:09.679 } 00:22:09.679 }, 00:22:09.679 { 00:22:09.679 "method": "bdev_raid_set_options", 00:22:09.679 "params": { 00:22:09.679 "process_max_bandwidth_mb_sec": 0, 00:22:09.679 "process_window_size_kb": 1024 00:22:09.679 } 00:22:09.679 }, 00:22:09.680 { 00:22:09.680 "method": "bdev_iscsi_set_options", 00:22:09.680 "params": { 00:22:09.680 "timeout_sec": 30 00:22:09.680 } 00:22:09.680 }, 00:22:09.680 { 00:22:09.680 "method": "bdev_nvme_set_options", 00:22:09.680 "params": { 00:22:09.680 "action_on_timeout": "none", 
00:22:09.680 "allow_accel_sequence": false, 00:22:09.680 "arbitration_burst": 0, 00:22:09.680 "bdev_retry_count": 3, 00:22:09.680 "ctrlr_loss_timeout_sec": 0, 00:22:09.680 "delay_cmd_submit": true, 00:22:09.680 "dhchap_dhgroups": [ 00:22:09.680 "null", 00:22:09.680 "ffdhe2048", 00:22:09.680 "ffdhe3072", 00:22:09.680 "ffdhe4096", 00:22:09.680 "ffdhe6144", 00:22:09.680 "ffdhe8192" 00:22:09.680 ], 00:22:09.680 "dhchap_digests": [ 00:22:09.680 "sha256", 00:22:09.680 "sha384", 00:22:09.680 "sha512" 00:22:09.680 ], 00:22:09.680 "disable_auto_failback": false, 00:22:09.680 "fast_io_fail_timeout_sec": 0, 00:22:09.680 "generate_uuids": false, 00:22:09.680 "high_priority_weight": 0, 00:22:09.680 "io_path_stat": false, 00:22:09.680 "io_queue_requests": 512, 00:22:09.680 "keep_alive_timeout_ms": 10000, 00:22:09.680 "low_priority_weight": 0, 00:22:09.680 "medium_priority_weight": 0, 00:22:09.680 "nvme_adminq_poll_period_us": 10000, 00:22:09.680 "nvme_error_stat": false, 00:22:09.680 "nvme_ioq_poll_period_us": 0, 00:22:09.680 "rdma_cm_event_timeout_ms": 0, 00:22:09.680 "rdma_max_cq_size": 0, 00:22:09.680 "rdma_srq_size": 0, 00:22:09.680 "reconnect_delay_sec": 0, 00:22:09.680 "timeout_admin_us": 0, 00:22:09.680 "timeout_us": 0, 00:22:09.680 "transport_ack_timeout": 0, 00:22:09.680 "transport_retry_count": 4, 00:22:09.680 "transport_tos": 0 00:22:09.680 } 00:22:09.680 }, 00:22:09.680 { 00:22:09.680 "method": "bdev_nvme_attach_controller", 00:22:09.680 "params": { 00:22:09.680 "adrfam": "IPv4", 00:22:09.680 "ctrlr_loss_timeout_sec": 0, 00:22:09.680 "ddgst": false, 00:22:09.680 "fast_io_fail_timeout_sec": 0, 00:22:09.680 "hdgst": false, 00:22:09.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.680 "multipath": "multipath", 00:22:09.680 "name": "nvme0", 00:22:09.680 "prchk_guard": false, 00:22:09.680 "prchk_reftag": false, 00:22:09.680 "psk": "key0", 00:22:09.680 "reconnect_delay_sec": 0, 00:22:09.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.680 "traddr": "10.0.0.3", 00:22:09.680 "trsvcid": "4420", 00:22:09.680 "trtype": "TCP" 00:22:09.680 } 00:22:09.680 }, 00:22:09.680 { 00:22:09.680 "method": "bdev_nvme_set_hotplug", 00:22:09.680 "params": { 00:22:09.680 "enable": false, 00:22:09.680 "period_us": 100000 00:22:09.680 } 00:22:09.680 }, 00:22:09.680 { 00:22:09.680 "method": "bdev_enable_histogram", 00:22:09.680 "params": { 00:22:09.680 "enable": true, 00:22:09.680 "name": "nvme0n1" 00:22:09.680 } 00:22:09.680 }, 00:22:09.680 { 00:22:09.680 "method": "bdev_wait_for_examine" 00:22:09.680 } 00:22:09.680 ] 00:22:09.680 }, 00:22:09.680 { 00:22:09.680 "subsystem": "nbd", 00:22:09.680 "config": [] 00:22:09.680 } 00:22:09.680 ] 00:22:09.680 }' 00:22:09.680 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.680 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.680 [2024-11-27 06:24:29.204605] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:09.680 [2024-11-27 06:24:29.204675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100377 ] 00:22:09.936 [2024-11-27 06:24:29.318611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
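Within the bdevperf configuration just echoed, bdev_nvme_attach_controller is the step that actually opens the TLS connection: "psk": "key0" points at the keyring entry created from /tmp/tmp.6fiugU9PEv, and the target side only admits host nqn.2016-06.io.spdk:host1 carrying that same PSK. The equivalent attach can also be issued interactively once bdevperf is up; a sketch follows, where the values mirror the JSON above but the exact rpc.py option spellings are an assumption and may differ between SPDK versions:

    # Sketch (option names assumed): register the PSK, then attach with TLS.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6fiugU9PEv
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0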
00:22:09.936 [2024-11-27 06:24:29.348668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.936 [2024-11-27 06:24:29.368476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.193 [2024-11-27 06:24:29.505254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.756 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.756 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:10.756 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:10.756 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:11.033 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.033 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.033 Running I/O for 1 seconds... 00:22:11.982 6587.00 IOPS, 25.73 MiB/s 00:22:11.982 Latency(us) 00:22:11.982 [2024-11-27T06:24:31.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.982 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:11.982 Verification LBA range: start 0x0 length 0x2000 00:22:11.982 nvme0n1 : 1.01 6653.09 25.99 0.00 0.00 19114.95 3352.42 17140.18 00:22:11.982 [2024-11-27T06:24:31.512Z] =================================================================================================================== 00:22:11.982 [2024-11-27T06:24:31.512Z] Total : 6653.09 25.99 0.00 0.00 19114.95 3352.42 17140.18 00:22:11.982 { 00:22:11.982 "results": [ 00:22:11.982 { 00:22:11.982 "job": "nvme0n1", 00:22:11.982 "core_mask": "0x2", 00:22:11.982 "workload": "verify", 00:22:11.982 "status": "finished", 00:22:11.982 "verify_range": { 00:22:11.982 "start": 0, 00:22:11.982 "length": 8192 00:22:11.982 }, 00:22:11.982 "queue_depth": 128, 00:22:11.982 "io_size": 4096, 00:22:11.982 "runtime": 1.009455, 00:22:11.982 "iops": 6653.094986898871, 00:22:11.982 "mibps": 25.988652292573715, 00:22:11.982 "io_failed": 0, 00:22:11.982 "io_timeout": 0, 00:22:11.982 "avg_latency_us": 19114.94565079947, 00:22:11.983 "min_latency_us": 3352.4184615384615, 00:22:11.983 "max_latency_us": 17140.184615384616 00:22:11.983 } 00:22:11.983 ], 00:22:11.983 "core_count": 1 00:22:11.983 } 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:11.983 nvmf_trace.0 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 100377 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100377 ']' 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100377 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100377 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100377' 00:22:11.983 killing process with pid 100377 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100377 00:22:11.983 Received shutdown signal, test time was about 1.000000 seconds 00:22:11.983 00:22:11.983 Latency(us) 00:22:11.983 [2024-11-27T06:24:31.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.983 [2024-11-27T06:24:31.513Z] =================================================================================================================== 00:22:11.983 [2024-11-27T06:24:31.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.983 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100377 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.241 rmmod nvme_tcp 00:22:12.241 rmmod nvme_fabrics 00:22:12.241 rmmod nvme_keyring 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 100334 ']' 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 
100334 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 100334 ']' 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 100334 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100334 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100334' 00:22:12.241 killing process with pid 100334 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 100334 00:22:12.241 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 100334 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:12.498 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.498 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.756 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:12.756 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.756 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.756 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.756 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:22:12.756 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Pdh6c8pbXP /tmp/tmp.QNw6wesAx8 /tmp/tmp.6fiugU9PEv 00:22:12.757 ************************************ 00:22:12.757 END TEST nvmf_tls 00:22:12.757 ************************************ 00:22:12.757 00:22:12.757 real 1m19.049s 00:22:12.757 user 2m9.847s 00:22:12.757 sys 0m21.920s 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:12.757 ************************************ 00:22:12.757 START TEST nvmf_fips 00:22:12.757 ************************************ 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:12.757 * Looking for test storage... 
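One number from the TLS run above is worth a quick plausibility check before the FIPS unit proceeds: by Little's law the average number of in-flight I/Os equals IOPS times mean latency, and the reported 6653.09 IOPS at 19114.95 us average works out to roughly the configured queue depth of 128, so the TLS connection stayed saturated for the full one-second run:

    # Little's law check on the bdevperf result above: IOPS * latency(s) ~= QD.
    echo '6653.094986898871 * 19114.94565079947 / 1000000' | bc -l   # ~127.17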
00:22:12.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.757 --rc genhtml_branch_coverage=1 00:22:12.757 --rc genhtml_function_coverage=1 00:22:12.757 --rc genhtml_legend=1 00:22:12.757 --rc geninfo_all_blocks=1 00:22:12.757 --rc geninfo_unexecuted_blocks=1 00:22:12.757 00:22:12.757 ' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:12.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.757 --rc genhtml_branch_coverage=1 00:22:12.757 --rc genhtml_function_coverage=1 00:22:12.757 --rc genhtml_legend=1 00:22:12.757 --rc geninfo_all_blocks=1 00:22:12.757 --rc geninfo_unexecuted_blocks=1 00:22:12.757 00:22:12.757 ' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.757 --rc genhtml_branch_coverage=1 00:22:12.757 --rc genhtml_function_coverage=1 00:22:12.757 --rc genhtml_legend=1 00:22:12.757 --rc geninfo_all_blocks=1 00:22:12.757 --rc geninfo_unexecuted_blocks=1 00:22:12.757 00:22:12.757 ' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.757 --rc genhtml_branch_coverage=1 00:22:12.757 --rc genhtml_function_coverage=1 00:22:12.757 --rc genhtml_legend=1 00:22:12.757 --rc geninfo_all_blocks=1 00:22:12.757 --rc geninfo_unexecuted_blocks=1 00:22:12.757 00:22:12.757 ' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
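The xtrace above steps through the dotted-version comparison in scripts/common.sh: the version strings are split on '.', '-' and ':', and the fields are compared numerically left to right (lcov 1.15 against 2 here; the same helper gates openssl 3.1.1 against 3.0.0 further down). A simplified reconstruction of that logic, condensed from what the trace shows rather than copied from the real helper:

    # Sketch of the field-wise version compare traced above (simplified).
    version_lt() {
        local IFS='.-:' i
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
            ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: keep the --rc options"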
00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.757 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.758 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:12.758 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:13.016 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:13.017 Error setting digest 00:22:13.017 40B29B91377F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:13.017 40B29B91377F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:13.017 
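
The `ge 3.1.1 3.0.0` walkthrough above is scripts/common.sh splitting two dot-separated versions into arrays and comparing them field by field until one side wins. A minimal standalone sketch of the same idea (the `version_ge` name is ours; the real cmp_versions also splits on `-`, and supports `<`, `==`, and other operators):

    # Return 0 if dot-separated version $1 >= $2, comparing field by field.
    version_ge() {
        local -a v1 v2
        IFS='.' read -ra v1 <<< "$1"
        IFS='.' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing fields count as 0, so 3.1 vs 3.1.0 compares equal.
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0
    }

    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 || exit 1
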
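The `NOT openssl md5` step above is a negative assertion: once OPENSSL_CONF points at a config that loads only the base and FIPS providers, fetching the MD5 algorithm must fail, so the two "Error setting digest" lines are the expected outcome, not a fault. The pattern, reduced to a sketch (the real NOT helper in autotest_common.sh also validates the executable and inspects the exit code):

    NOT() { "$@" && return 1 || return 0; }   # succeed only if the command fails

    export OPENSSL_CONF=spdk_fips.conf        # FIPS-only provider configuration
    if NOT openssl md5 /dev/null; then
        echo "MD5 correctly unavailable under FIPS"
    else
        echo "FIPS mode is not actually enforced" >&2
        exit 1
    fi
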
06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:13.017 Cannot find device "nvmf_init_br" 00:22:13.017 06:24:32 
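
The "Cannot find device" lines here and below are harmless: nvmf_veth_init first tries to tear down leftovers from a previous run, then builds the topology fresh. What it constructs is a network namespace for the target, veth pairs whose host-side ends hang off a common bridge, and comment-tagged iptables ACCEPT rules for port 4420. Condensed to one pair per side (the trace creates two of each; names and addresses are the ones traced in this run; run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the ns

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link set nvmf_init_br master nvmf_br    # bridge joins the two host-side ends
    ip link set nvmf_tgt_br master nvmf_br

    # Tag the firewall rule so teardown can strip exactly these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3    # initiator -> target sanity check across the bridge
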
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:13.017 Cannot find device "nvmf_init_br2" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:13.017 Cannot find device "nvmf_tgt_br" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:13.017 Cannot find device "nvmf_tgt_br2" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:13.017 Cannot find device "nvmf_init_br" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:13.017 Cannot find device "nvmf_init_br2" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:13.017 Cannot find device "nvmf_tgt_br" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:13.017 Cannot find device "nvmf_tgt_br2" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:13.017 Cannot find device "nvmf_br" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:13.017 Cannot find device "nvmf_init_if" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:13.017 Cannot find device "nvmf_init_if2" 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:13.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:13.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:13.017 06:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:13.017 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:13.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:13.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:13.276 00:22:13.276 --- 10.0.0.3 ping statistics --- 00:22:13.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.276 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:13.276 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:13.276 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:13.276 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:22:13.276 00:22:13.276 --- 10.0.0.4 ping statistics --- 00:22:13.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.277 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:13.277 00:22:13.277 --- 10.0.0.1 ping statistics --- 00:22:13.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.277 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:13.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:13.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:22:13.277 00:22:13.277 --- 10.0.0.2 ping statistics --- 00:22:13.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.277 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=100710 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 100710 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 100710 ']' 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.277 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.277 [2024-11-27 06:24:32.704266] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:13.277 [2024-11-27 06:24:32.704326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.536 [2024-11-27 06:24:32.821042] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:22:13.536 [2024-11-27 06:24:32.848947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.536 [2024-11-27 06:24:32.867251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.536 [2024-11-27 06:24:32.867283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.536 [2024-11-27 06:24:32.867289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.536 [2024-11-27 06:24:32.867294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.536 [2024-11-27 06:24:32.867298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.536 [2024-11-27 06:24:32.867543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.dXk 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.dXk 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.dXk 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.dXk 00:22:14.101 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:14.358 [2024-11-27 06:24:33.778117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.358 [2024-11-27 06:24:33.794070] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:14.358 [2024-11-27 06:24:33.794196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:14.358 malloc0 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=100764 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 100764 /var/tmp/bdevperf.sock 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@835 -- # '[' -z 100764 ']' 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.358 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:14.616 [2024-11-27 06:24:33.905465] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:14.616 [2024-11-27 06:24:33.905527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100764 ] 00:22:14.616 [2024-11-27 06:24:34.019500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:14.616 [2024-11-27 06:24:34.049027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.616 [2024-11-27 06:24:34.068606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.616 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.616 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:14.616 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.dXk 00:22:14.874 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:15.132 [2024-11-27 06:24:34.534932] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.132 TLSTESTn1 00:22:15.132 06:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:15.390 Running I/O for 10 seconds... 
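
The sequence just traced is the heart of the FIPS test: a TLS pre-shared key is written to a mode-0600 temp file, registered with the already-running bdevperf (started with -z so it waits on its own RPC socket), and used to attach an NVMe/TCP controller with TLS before ten seconds of verify I/O. The same steps, condensed (every path, NQN, and the interchange-format key is taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"    # restrictive permissions, as in the trace

    # Register the PSK with bdevperf's keyring, then attach over TLS.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Drive the workload configured at bdevperf startup (-q 128 -o 4096 -w verify -t 10).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
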
00:22:17.256 5645.00 IOPS, 22.05 MiB/s [2024-11-27T06:24:37.719Z] 5655.00 IOPS, 22.09 MiB/s [2024-11-27T06:24:39.093Z] 5915.33 IOPS, 23.11 MiB/s [2024-11-27T06:24:40.031Z] 6173.50 IOPS, 24.12 MiB/s [2024-11-27T06:24:40.966Z] 6143.80 IOPS, 24.00 MiB/s [2024-11-27T06:24:41.900Z] 6287.00 IOPS, 24.56 MiB/s [2024-11-27T06:24:42.835Z] 6380.43 IOPS, 24.92 MiB/s [2024-11-27T06:24:43.769Z] 6442.50 IOPS, 25.17 MiB/s [2024-11-27T06:24:45.143Z] 6487.89 IOPS, 25.34 MiB/s [2024-11-27T06:24:45.143Z] 6449.70 IOPS, 25.19 MiB/s 00:22:25.613 Latency(us) 00:22:25.613 [2024-11-27T06:24:45.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.613 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:25.613 Verification LBA range: start 0x0 length 0x2000 00:22:25.613 TLSTESTn1 : 10.02 6451.59 25.20 0.00 0.00 19802.57 5671.38 49000.76 00:22:25.613 [2024-11-27T06:24:45.143Z] =================================================================================================================== 00:22:25.613 [2024-11-27T06:24:45.143Z] Total : 6451.59 25.20 0.00 0.00 19802.57 5671.38 49000.76 00:22:25.613 { 00:22:25.613 "results": [ 00:22:25.613 { 00:22:25.613 "job": "TLSTESTn1", 00:22:25.613 "core_mask": "0x4", 00:22:25.613 "workload": "verify", 00:22:25.613 "status": "finished", 00:22:25.613 "verify_range": { 00:22:25.613 "start": 0, 00:22:25.613 "length": 8192 00:22:25.613 }, 00:22:25.613 "queue_depth": 128, 00:22:25.613 "io_size": 4096, 00:22:25.613 "runtime": 10.016441, 00:22:25.613 "iops": 6451.592936053834, 00:22:25.613 "mibps": 25.20153490646029, 00:22:25.613 "io_failed": 0, 00:22:25.613 "io_timeout": 0, 00:22:25.613 "avg_latency_us": 19802.565990243857, 00:22:25.613 "min_latency_us": 5671.384615384615, 00:22:25.613 "max_latency_us": 49000.763076923075 00:22:25.613 } 00:22:25.613 ], 00:22:25.613 "core_count": 1 00:22:25.613 } 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:25.613 nvmf_trace.0 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 100764 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 100764 ']' 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill 
-0 100764 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100764 00:22:25.613 killing process with pid 100764 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100764' 00:22:25.613 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.613 00:22:25.613 Latency(us) 00:22:25.613 [2024-11-27T06:24:45.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.613 [2024-11-27T06:24:45.143Z] =================================================================================================================== 00:22:25.613 [2024-11-27T06:24:45.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.613 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 100764 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 100764 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.614 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.614 rmmod nvme_tcp 00:22:25.614 rmmod nvme_fabrics 00:22:25.614 rmmod nvme_keyring 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 100710 ']' 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 100710 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 100710 ']' 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 100710 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100710 00:22:25.614 killing process with pid 100710 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:25.614 06:24:45 
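
Teardown (nvmftestfini plus the fips.sh cleanup trap) mirrors the setup in reverse: unload the kernel initiator modules, kill both daemons, strip only the comment-tagged firewall rules, and delete the bridge, veth pairs, and namespace. In outline (a sketch; the pid variables stand for 100764 and 100710 above, and the real helpers also sync, retry module removal, and wait on the processes):

    for mod in nvme-tcp nvme-fabrics nvme-keyring; do
        modprobe -v -r "$mod"
    done
    kill "$bdevperf_pid" "$nvmfpid" 2> /dev/null

    # iptr: re-load the ruleset minus every rule tagged SPDK_NVMF at setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk

    rm -f "$key_path"    # /tmp/spdk-psk.dXk in this run
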
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100710' 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 100710 00:22:25.614 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 100710 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.886 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 
0 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.dXk 00:22:26.144 00:22:26.144 real 0m13.334s 00:22:26.144 user 0m19.103s 00:22:26.144 sys 0m4.496s 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:26.144 ************************************ 00:22:26.144 END TEST nvmf_fips 00:22:26.144 ************************************ 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:26.144 ************************************ 00:22:26.144 START TEST nvmf_control_msg_list 00:22:26.144 ************************************ 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:26.144 * Looking for test storage... 00:22:26.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:26.144 06:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:26.144 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.145 --rc genhtml_branch_coverage=1 00:22:26.145 --rc genhtml_function_coverage=1 00:22:26.145 --rc genhtml_legend=1 00:22:26.145 --rc geninfo_all_blocks=1 00:22:26.145 --rc geninfo_unexecuted_blocks=1 00:22:26.145 00:22:26.145 ' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.145 --rc genhtml_branch_coverage=1 00:22:26.145 --rc genhtml_function_coverage=1 00:22:26.145 --rc genhtml_legend=1 00:22:26.145 --rc geninfo_all_blocks=1 00:22:26.145 --rc geninfo_unexecuted_blocks=1 00:22:26.145 00:22:26.145 ' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.145 --rc genhtml_branch_coverage=1 00:22:26.145 --rc genhtml_function_coverage=1 00:22:26.145 --rc genhtml_legend=1 00:22:26.145 --rc geninfo_all_blocks=1 00:22:26.145 --rc geninfo_unexecuted_blocks=1 00:22:26.145 00:22:26.145 ' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.145 --rc genhtml_branch_coverage=1 00:22:26.145 --rc genhtml_function_coverage=1 00:22:26.145 --rc genhtml_legend=1 00:22:26.145 --rc 
geninfo_all_blocks=1 00:22:26.145 --rc geninfo_unexecuted_blocks=1 00:22:26.145 00:22:26.145 ' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.145 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.145 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:26.146 Cannot find device "nvmf_init_br" 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:26.146 Cannot find device "nvmf_init_br2" 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:26.146 Cannot find device "nvmf_tgt_br" 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.146 Cannot find device "nvmf_tgt_br2" 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:22:26.146 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:26.432 Cannot find device "nvmf_init_br" 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:26.432 Cannot find device "nvmf_init_br2" 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:26.432 Cannot find device "nvmf_tgt_br" 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:26.432 Cannot find device "nvmf_tgt_br2" 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:26.432 Cannot find device "nvmf_br" 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:26.432 Cannot find 
device "nvmf_init_if" 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:26.432 Cannot find device "nvmf_init_if2" 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:26.432 06:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:26.432 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:26.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:26.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:22:26.433 00:22:26.433 --- 10.0.0.3 ping statistics --- 00:22:26.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.433 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:26.433 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:26.433 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:22:26.433 00:22:26.433 --- 10.0.0.4 ping statistics --- 00:22:26.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.433 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:26.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:26.433 00:22:26.433 --- 10.0.0.1 ping statistics --- 00:22:26.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.433 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:26.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:22:26.433 00:22:26.433 --- 10.0.0.2 ping statistics --- 00:22:26.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.433 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.433 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=101150 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 101150 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 101150 ']' 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.691 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.691 [2024-11-27 06:24:45.997239] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:26.691 [2024-11-27 06:24:45.997304] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.691 [2024-11-27 06:24:46.116531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:26.691 [2024-11-27 06:24:46.143214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.691 [2024-11-27 06:24:46.161068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.691 [2024-11-27 06:24:46.161107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.691 [2024-11-27 06:24:46.161113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.691 [2024-11-27 06:24:46.161119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.691 [2024-11-27 06:24:46.161123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.691 [2024-11-27 06:24:46.161376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:27.626 [2024-11-27 06:24:46.977741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.626 06:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:27.626 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.627 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:27.627 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.627 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:27.627 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.627 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:27.627 Malloc0 00:22:27.627 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:27.627 [2024-11-27 06:24:47.012471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=101201 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=101202 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=101203 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 101201 00:22:27.627 06:24:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 
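The transport above was created with --control-msg-num 1, leaving the target a single control-message buffer, so the three spdk_nvme_perf initiators launched here must take turns using it. A condensed sketch of the launch pattern (binary path and -r string exactly as in this run; the PID bookkeeping variables are illustrative):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  TR='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  # one initiator per core mask, all reading the same namespace
  "$PERF" -c 0x2 -q 1 -o 4096 -w randread -t 1 -r "$TR" & p1=$!
  "$PERF" -c 0x4 -q 1 -o 4096 -w randread -t 1 -r "$TR" & p2=$!
  "$PERF" -c 0x8 -q 1 -o 4096 -w randread -t 1 -r "$TR" & p3=$!
  wait "$p1" "$p2" "$p3"   # the test passes only if all three finish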
00:22:27.885 [2024-11-27 06:24:47.180686] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:27.885 [2024-11-27 06:24:47.190862] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:27.885 [2024-11-27 06:24:47.191198] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:28.821 Initializing NVMe Controllers
00:22:28.821 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:28.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:22:28.821 Initialization complete. Launching workers.
00:22:28.821 ========================================================
00:22:28.821                                                                 Latency(us)
00:22:28.821 Device Information                                      :    IOPS     MiB/s   Average     min     max
00:22:28.821 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4705.00   18.38   212.24   90.35  540.32
00:22:28.821 ========================================================
00:22:28.821 Total                                                   : 4705.00   18.38   212.24   90.35  540.32
00:22:28.821
00:22:28.821 Initializing NVMe Controllers
00:22:28.821 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:28.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:22:28.821 Initialization complete. Launching workers.
00:22:28.821 ========================================================
00:22:28.821                                                                 Latency(us)
00:22:28.821 Device Information                                      :    IOPS     MiB/s   Average     min     max
00:22:28.821 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4678.00   18.27   213.55   88.67  628.24
00:22:28.821 ========================================================
00:22:28.821 Total                                                   : 4678.00   18.27   213.55   88.67  628.24
00:22:28.821
00:22:28.821 Initializing NVMe Controllers
00:22:28.821 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:22:28.821 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:22:28.821 Initialization complete. Launching workers.
00:22:28.821 ========================================================
00:22:28.821                                                                 Latency(us)
00:22:28.821 Device Information                                      :    IOPS     MiB/s   Average     min     max
00:22:28.821 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4657.97   18.20   214.43   88.72  559.69
00:22:28.821 ========================================================
00:22:28.821 Total                                                   : 4657.97   18.20   214.43   88.72  559.69
00:22:28.821
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 101202
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 101203
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:28.821 rmmod nvme_tcp
00:22:28.821 rmmod nvme_fabrics
00:22:28.821 rmmod nvme_keyring
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 101150 ']'
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 101150
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 101150 ']'
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 101150
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:28.821 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101150
00:22:29.079 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:29.079 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:29.079 killing process with pid 101150
00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101150'
00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 101150
00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@978 -- # wait 101150 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:29.080 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:22:29.338 ************************************ 00:22:29.338 END TEST 
nvmf_control_msg_list 00:22:29.338 ************************************ 00:22:29.338 00:22:29.338 real 0m3.220s 00:22:29.338 user 0m5.428s 00:22:29.338 sys 0m1.115s 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.338 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:29.338 ************************************ 00:22:29.338 START TEST nvmf_wait_for_buf 00:22:29.338 ************************************ 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:29.339 * Looking for test storage... 00:22:29.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.339 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:29.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.598 --rc genhtml_branch_coverage=1 00:22:29.598 --rc genhtml_function_coverage=1 00:22:29.598 --rc genhtml_legend=1 00:22:29.598 --rc geninfo_all_blocks=1 00:22:29.598 --rc geninfo_unexecuted_blocks=1 00:22:29.598 00:22:29.598 ' 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:29.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.598 --rc genhtml_branch_coverage=1 00:22:29.598 --rc genhtml_function_coverage=1 00:22:29.598 --rc genhtml_legend=1 00:22:29.598 --rc geninfo_all_blocks=1 00:22:29.598 --rc geninfo_unexecuted_blocks=1 00:22:29.598 00:22:29.598 ' 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:29.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.598 --rc genhtml_branch_coverage=1 00:22:29.598 --rc genhtml_function_coverage=1 00:22:29.598 --rc genhtml_legend=1 00:22:29.598 --rc geninfo_all_blocks=1 00:22:29.598 --rc geninfo_unexecuted_blocks=1 00:22:29.598 00:22:29.598 ' 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:29.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.598 --rc genhtml_branch_coverage=1 00:22:29.598 --rc genhtml_function_coverage=1 00:22:29.598 --rc genhtml_legend=1 00:22:29.598 --rc geninfo_all_blocks=1 00:22:29.598 --rc geninfo_unexecuted_blocks=1 00:22:29.598 00:22:29.598 ' 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:29.598 06:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [repeated toolchain entries trimmed; same value as paths/export.sh@2 above]
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=[same entries, rotated; duplicate dump trimmed]
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo [same PATH value; duplicate dump trimmed]
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:29.598 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
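nvmftestinit is about to rebuild the virtual test network for this test. For orientation, a condensed sketch of the topology nvmf_veth_init creates; only one of the two initiator/target pairs is shown, and the addresses come from the variables traced below:

  ip netns add nvmf_tgt_ns_spdk                               # target runs isolated
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # ties both sides together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # bring every interface up, then open TCP/4420 in iptables as the trace shows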
00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:29.598 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:29.599 Cannot find device "nvmf_init_br" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:29.599 Cannot find device "nvmf_init_br2" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:29.599 Cannot find device "nvmf_tgt_br" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:29.599 Cannot find device "nvmf_tgt_br2" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:29.599 Cannot find device "nvmf_init_br" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:29.599 Cannot find device "nvmf_init_br2" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:29.599 Cannot find device "nvmf_tgt_br" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:29.599 Cannot find device "nvmf_tgt_br2" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:29.599 Cannot find device "nvmf_br" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:29.599 Cannot find device "nvmf_init_if" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:29.599 Cannot find device "nvmf_init_if2" 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:29.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:29.599 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:22:29.599 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:29.599 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:29.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:29.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:22:29.861 00:22:29.861 --- 10.0.0.3 ping statistics --- 00:22:29.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.861 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:29.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:29.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:22:29.861 00:22:29.861 --- 10.0.0.4 ping statistics --- 00:22:29.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.861 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:29.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:22:29.861 00:22:29.861 --- 10.0.0.1 ping statistics --- 00:22:29.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.861 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:29.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:29.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:22:29.861 00:22:29.861 --- 10.0.0.2 ping statistics --- 00:22:29.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.861 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=101433 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 101433 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 101433 ']' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:29.861 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:29.861 [2024-11-27 06:24:49.237448] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:22:29.861 [2024-11-27 06:24:49.237513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.861 [2024-11-27 06:24:49.352334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:29.861 [2024-11-27 06:24:49.379104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.122 [2024-11-27 06:24:49.397743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.122 [2024-11-27 06:24:49.397786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.122 [2024-11-27 06:24:49.397793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.122 [2024-11-27 06:24:49.397797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.122 [2024-11-27 06:24:49.397802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.122 [2024-11-27 06:24:49.398066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- 
# rpc_cmd framework_start_init 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.688 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.946 Malloc0 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.946 [2024-11-27 06:24:50.251238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:30.946 [2024-11-27 06:24:50.275293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.946 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:30.947 [2024-11-27 06:24:50.470993] 
subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:32.420 Initializing NVMe Controllers 00:22:32.420 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:22:32.420 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:32.420 Initialization complete. Launching workers. 00:22:32.420 ======================================================== 00:22:32.420 Latency(us) 00:22:32.420 Device Information : IOPS MiB/s Average min max 00:22:32.420 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.62 15.95 32467.78 8018.23 64012.00 00:22:32.420 ======================================================== 00:22:32.420 Total : 127.62 15.95 32467.78 8018.23 64012.00 00:22:32.420 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.420 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.420 rmmod nvme_tcp 00:22:32.420 rmmod nvme_fabrics 00:22:32.420 rmmod nvme_keyring 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 101433 ']' 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 101433 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 101433 ']' 00:22:32.678 06:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 101433 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101433 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.678 killing process with pid 101433 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101433' 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 101433 00:22:32.678 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 101433 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:32.678 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete 
nvmf_init_if2 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:22:32.937 00:22:32.937 real 0m3.606s 00:22:32.937 user 0m3.236s 00:22:32.937 sys 0m0.599s 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.937 ************************************ 00:22:32.937 END TEST nvmf_wait_for_buf 00:22:32.937 ************************************ 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.937 ************************************ 00:22:32.937 START TEST nvmf_fuzz 00:22:32.937 ************************************ 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:32.937 * Looking for test storage... 
00:22:32.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:22:32.937 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.196 --rc genhtml_branch_coverage=1 00:22:33.196 --rc genhtml_function_coverage=1 00:22:33.196 --rc genhtml_legend=1 00:22:33.196 --rc geninfo_all_blocks=1 00:22:33.196 --rc geninfo_unexecuted_blocks=1 00:22:33.196 00:22:33.196 ' 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.196 --rc genhtml_branch_coverage=1 00:22:33.196 --rc genhtml_function_coverage=1 00:22:33.196 --rc genhtml_legend=1 00:22:33.196 --rc geninfo_all_blocks=1 00:22:33.196 --rc geninfo_unexecuted_blocks=1 00:22:33.196 00:22:33.196 ' 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.196 --rc genhtml_branch_coverage=1 00:22:33.196 --rc genhtml_function_coverage=1 00:22:33.196 --rc genhtml_legend=1 00:22:33.196 --rc geninfo_all_blocks=1 00:22:33.196 --rc geninfo_unexecuted_blocks=1 00:22:33.196 00:22:33.196 ' 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.196 --rc genhtml_branch_coverage=1 00:22:33.196 --rc genhtml_function_coverage=1 00:22:33.196 --rc genhtml_legend=1 00:22:33.196 --rc geninfo_all_blocks=1 00:22:33.196 --rc geninfo_unexecuted_blocks=1 00:22:33.196 00:22:33.196 ' 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:33.196 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
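The lcov version gate traced here (scripts/common.sh@333-@368) splits each version string on '.', '-' and ':' and compares the fields numerically from left to right, so 1.15 sorts below 2. A minimal bash sketch of that element-wise compare, under a hypothetical name (the harness's own helpers are lt/cmp_versions), assuming plain decimal fields as the traced decimal() check enforces:

    # Hypothetical re-creation of the traced compare, not the harness's exact code.
    version_lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        local i a b
        for ((i = 0; i < n; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}      # missing fields compare as 0
            (( a > b )) && return 1          # first differing field decides
            (( a < b )) && return 0
        done
        return 1                             # equal versions are not strictly less
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace's return 0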
00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.197 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
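Worth noting how the target command line comes together: build_nvmf_app_args (nvmf/common.sh@25-@39) appends flags to the NVMF_APP bash array, and once the namespace exists common.sh@227 prefixes the array with NVMF_TARGET_NS_CMD, which is why every nvmf_tgt in this log launches under ip netns exec. A condensed sketch of the traced assignments, with values mirroring this run (the seed value of the array is an assumption, the log only shows the appends):

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)    # assumed seed value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                   # common.sh@29, SHM ID 0 here
    NVMF_APP+=("${NO_HUGE[@]}")                                   # common.sh@31, empty in this run
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)           # common.sh@156
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")        # common.sh@227
    "${NVMF_APP[@]}" -m 0x1 &                                     # fabrics_fuzz.sh@13 adds -m 0x1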
00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:33.197 Cannot find device "nvmf_init_br" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:22:33.197 06:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:33.197 Cannot find device "nvmf_init_br2" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:33.197 Cannot find device "nvmf_tgt_br" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.197 Cannot find device "nvmf_tgt_br2" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:33.197 Cannot find device "nvmf_init_br" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:33.197 Cannot find device "nvmf_init_br2" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:33.197 Cannot find device "nvmf_tgt_br" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:33.197 Cannot find device "nvmf_tgt_br2" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:33.197 Cannot find device "nvmf_br" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:33.197 Cannot find device "nvmf_init_if" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:33.197 Cannot find device "nvmf_init_if2" 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:22:33.197 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:33.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:33.198 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.456 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.456 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:33.456 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:33.456 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:33.456 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:33.456 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:33.457 06:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:33.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:33.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:22:33.457 00:22:33.457 --- 10.0.0.3 ping statistics --- 00:22:33.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.457 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:33.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:33.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:22:33.457 00:22:33.457 --- 10.0.0.4 ping statistics --- 00:22:33.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.457 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:33.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:22:33.457 00:22:33.457 --- 10.0.0.1 ping statistics --- 00:22:33.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.457 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:33.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:33.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:22:33.457 00:22:33.457 --- 10.0.0.2 ping statistics --- 00:22:33.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.457 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=101721 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 101721 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 101721 ']' 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.457 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.390 Malloc0 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:22:34.390 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:22:34.648 Shutting down the fuzz application 00:22:34.648 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:34.907 Shutting down the fuzz application 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.907 rmmod nvme_tcp 00:22:34.907 rmmod nvme_fabrics 00:22:34.907 rmmod nvme_keyring 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 101721 ']' 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 101721 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 101721 ']' 00:22:34.907 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 101721 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101721 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.908 killing process with pid 101721 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101721' 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 101721 00:22:34.908 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 101721 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.166 
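Both fuzz passes above exited cleanly ("Shutting down the fuzz application"). Reduced to the two traced invocations, the test first fuzzes time-boxed with a fixed seed, then replays the bundled JSON command file against the same trid:

    fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'

    # Pass 1 (fabrics_fuzz.sh@30): 30 s run, seeded with -S 123456 for reproducibility
    $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

    # Pass 2 (fabrics_fuzz.sh@32): replay the example JSON command set via -j
    $fuzz -m 0x2 -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a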
06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:35.166 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:22:35.424 ************************************ 00:22:35.424 END TEST nvmf_fuzz 00:22:35.424 ************************************ 00:22:35.424 00:22:35.424 real 0m2.410s 00:22:35.424 user 0m2.212s 00:22:35.424 sys 0m0.549s 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:35.424 
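The teardown just traced (nvmf/common.sh@297 and @233-@246) is the mirror image of the setup: restore iptables minus the SPDK_NVMF-tagged rules, unbridge and delete every link, then drop the namespace. Condensed, with the one step the log does not expand marked as an assumption:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr, @791: drop only our tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster                        # @233-@236
        ip link set "$dev" down                            # @237-@240
    done
    ip link delete nvmf_br type bridge                     # @241
    ip link delete nvmf_init_if                            # @242
    ip link delete nvmf_init_if2                           # @243
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # @244
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2   # @245
    ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns, not shown in the log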
06:24:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:35.424 ************************************ 00:22:35.424 START TEST nvmf_multiconnection 00:22:35.424 ************************************ 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:35.424 * Looking for test storage... 00:22:35.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.424 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.683 --rc genhtml_branch_coverage=1 00:22:35.683 --rc genhtml_function_coverage=1 00:22:35.683 --rc genhtml_legend=1 00:22:35.683 --rc geninfo_all_blocks=1 00:22:35.683 --rc geninfo_unexecuted_blocks=1 00:22:35.683 00:22:35.683 ' 00:22:35.683 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.683 --rc genhtml_branch_coverage=1 00:22:35.683 --rc genhtml_function_coverage=1 00:22:35.683 --rc genhtml_legend=1 00:22:35.683 --rc geninfo_all_blocks=1 00:22:35.684 --rc geninfo_unexecuted_blocks=1 00:22:35.684 00:22:35.684 ' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.684 --rc genhtml_branch_coverage=1 00:22:35.684 --rc genhtml_function_coverage=1 00:22:35.684 --rc genhtml_legend=1 00:22:35.684 --rc geninfo_all_blocks=1 00:22:35.684 --rc geninfo_unexecuted_blocks=1 00:22:35.684 00:22:35.684 ' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.684 --rc genhtml_branch_coverage=1 00:22:35.684 --rc genhtml_function_coverage=1 00:22:35.684 --rc genhtml_legend=1 00:22:35.684 --rc geninfo_all_blocks=1 00:22:35.684 --rc geninfo_unexecuted_blocks=1 00:22:35.684 00:22:35.684 ' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.684 
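The nvmf/common.sh environment sourced above pins the fabric ports (4420/4421/4422) and derives a fresh host identity from nvme gen-hostnqn. A short sketch of that derivation, assuming the host ID is simply the UUID suffix of the generated NQN (the trace shows only the resulting values, which are consistent with this):

    # nvme gen-hostnqn prints nqn.2014-08.org.nvmexpress:uuid:<uuid>; keep the
    # full NQN plus the bare UUID so later nvme connect calls can pass both.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # strip everything through the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
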
06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.684 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:35.684 06:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:35.684 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:35.684 Cannot find device "nvmf_init_br" 00:22:35.684 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:22:35.684 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:35.684 Cannot find device "nvmf_init_br2" 00:22:35.684 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:35.685 Cannot find device "nvmf_tgt_br" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.685 Cannot find device "nvmf_tgt_br2" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:35.685 Cannot find device "nvmf_init_br" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:35.685 Cannot find device "nvmf_init_br2" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:35.685 Cannot find device "nvmf_tgt_br" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:35.685 Cannot find device "nvmf_tgt_br2" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:35.685 Cannot find device "nvmf_br" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:35.685 Cannot find device "nvmf_init_if" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:22:35.685 Cannot find device "nvmf_init_if2" 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:35.685 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:35.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:35.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:35.943 00:22:35.943 --- 10.0.0.3 ping statistics --- 00:22:35.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.943 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:35.943 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:35.943 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:22:35.943 00:22:35.943 --- 10.0.0.4 ping statistics --- 00:22:35.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.943 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:35.943 00:22:35.943 --- 10.0.0.1 ping statistics --- 00:22:35.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.943 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:35.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:22:35.943 00:22:35.943 --- 10.0.0.2 ping statistics --- 00:22:35.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.943 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:35.943 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=101975 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 101975 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 101975 ']' 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
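The nvmf_veth_init block traced above (common.sh@177-225) builds the virtual test network: a target namespace, veth pairs whose target ends live inside it, a bridge joining the peer ends, port-4420 iptables accepts, and ping checks in both directions. A condensed replay for one initiator/target pair (the trace builds two of each, plus the second addresses 10.0.0.2/10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br          # join the peer ends on the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                               # initiator -> target reachability
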
00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.944 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:35.944 [2024-11-27 06:24:55.309712] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:22:35.944 [2024-11-27 06:24:55.309782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.944 [2024-11-27 06:24:55.425869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:35.944 [2024-11-27 06:24:55.446761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.944 [2024-11-27 06:24:55.466897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.944 [2024-11-27 06:24:55.466946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.944 [2024-11-27 06:24:55.466952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.944 [2024-11-27 06:24:55.466957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.944 [2024-11-27 06:24:55.466961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.944 [2024-11-27 06:24:55.467783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.944 [2024-11-27 06:24:55.468091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.944 [2024-11-27 06:24:55.468611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.944 [2024-11-27 06:24:55.468612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 [2024-11-27 06:24:55.593511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 
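With the TCP transport created, the eleven-iteration loop traced below provisions one namespace-backed subsystem per connection. Each pass is the same four RPCs from multiconnection.sh@21-25; the values 64/512, NVMF_SUBSYS=11, and the 10.0.0.3:4420 listener all come from the variables set earlier in the trace:

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        rpc_cmd bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done
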
00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 Malloc1 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 [2024-11-27 06:24:55.649295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 Malloc2 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 Malloc3 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.203 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.204 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.204 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:36.204 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.204 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.474 Malloc4 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.474 Malloc5 00:22:36.474 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 Malloc6 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:36.475 06:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 Malloc7 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 Malloc8 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.475 Malloc9 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.475 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.476 Malloc10 
00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.476 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.762 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 Malloc11 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:36.762 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:39.290 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:41.195 06:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:41.195 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:43.128 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:22:43.386 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:43.386 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:43.386 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:43.386 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:22:43.386 06:25:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:45.289 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:22:45.548 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:45.548 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:45.548 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:45.548 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:45.548 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:47.447 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:47.447 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:22:47.447 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:47.705 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:47.705 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:47.705 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:47.705 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:47.705 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:22:47.705 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:47.705 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:47.705 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:47.706 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:47.706 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:50.244 06:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:52.146 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.048 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:22:54.305 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:54.305 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:54.305 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:54.305 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:54.305 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:56.203 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:56.203 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:22:56.203 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:22:56.460 06:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:56.460 06:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:22:59.001 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:59.001 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:59.001 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:22:59.001 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:59.002 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:59.002 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:22:59.002 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:59.002 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:22:59.002 06:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:59.002 06:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:22:59.002 06:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:59.002 06:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:59.002 06:25:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:00.907 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:00.907 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:00.907 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:23:00.907 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:00.907 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:00.907 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:00.907 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:00.907 [global] 00:23:00.907 thread=1 00:23:00.907 invalidate=1 00:23:00.907 rw=read 00:23:00.907 time_based=1 
00:23:00.907 runtime=10 00:23:00.907 ioengine=libaio 00:23:00.907 direct=1 00:23:00.907 bs=262144 00:23:00.907 iodepth=64 00:23:00.907 norandommap=1 00:23:00.907 numjobs=1 00:23:00.907 00:23:00.907 [job0] 00:23:00.907 filename=/dev/nvme0n1 00:23:00.907 [job1] 00:23:00.907 filename=/dev/nvme10n1 00:23:00.907 [job2] 00:23:00.907 filename=/dev/nvme1n1 00:23:00.907 [job3] 00:23:00.907 filename=/dev/nvme2n1 00:23:00.907 [job4] 00:23:00.907 filename=/dev/nvme3n1 00:23:00.907 [job5] 00:23:00.907 filename=/dev/nvme4n1 00:23:00.907 [job6] 00:23:00.907 filename=/dev/nvme5n1 00:23:00.907 [job7] 00:23:00.907 filename=/dev/nvme6n1 00:23:00.907 [job8] 00:23:00.907 filename=/dev/nvme7n1 00:23:00.907 [job9] 00:23:00.907 filename=/dev/nvme8n1 00:23:00.907 [job10] 00:23:00.907 filename=/dev/nvme9n1 00:23:00.908 Could not set queue depth (nvme0n1) 00:23:00.908 Could not set queue depth (nvme10n1) 00:23:00.908 Could not set queue depth (nvme1n1) 00:23:00.908 Could not set queue depth (nvme2n1) 00:23:00.908 Could not set queue depth (nvme3n1) 00:23:00.908 Could not set queue depth (nvme4n1) 00:23:00.908 Could not set queue depth (nvme5n1) 00:23:00.908 Could not set queue depth (nvme6n1) 00:23:00.908 Could not set queue depth (nvme7n1) 00:23:00.908 Could not set queue depth (nvme8n1) 00:23:00.908 Could not set queue depth (nvme9n1) 00:23:00.908 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:00.908 fio-3.35 00:23:00.908 Starting 11 threads 00:23:13.162 00:23:13.162 job0: (groupid=0, jobs=1): err= 0: pid=102441: Wed Nov 27 06:25:30 2024 00:23:13.162 read: IOPS=144, BW=36.1MiB/s (37.8MB/s)(367MiB/10158msec) 00:23:13.162 slat (usec): min=14, max=301476, avg=6846.15, stdev=29991.88 00:23:13.162 clat (msec): min=117, max=711, avg=435.93, stdev=60.66 00:23:13.162 lat (msec): min=207, max=738, avg=442.77, stdev=66.20 00:23:13.162 clat percentiles (msec): 00:23:13.162 | 1.00th=[ 249], 5.00th=[ 334], 10.00th=[ 363], 20.00th=[ 388], 00:23:13.162 | 30.00th=[ 414], 40.00th=[ 422], 50.00th=[ 439], 60.00th=[ 451], 00:23:13.162 | 70.00th=[ 472], 80.00th=[ 485], 90.00th=[ 502], 95.00th=[ 518], 00:23:13.162 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 709], 99.95th=[ 709], 00:23:13.162 | 99.99th=[ 709] 00:23:13.162 bw ( KiB/s): min=31744, max=47616, 
per=3.17%, avg=35895.15, stdev=4837.27, samples=20 00:23:13.162 iops : min= 124, max= 186, avg=140.20, stdev=18.88, samples=20 00:23:13.162 lat (msec) : 250=1.02%, 500=87.65%, 750=11.32% 00:23:13.162 cpu : usr=0.05%, sys=0.40%, ctx=291, majf=0, minf=4097 00:23:13.162 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:23:13.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.162 issued rwts: total=1466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.162 job1: (groupid=0, jobs=1): err= 0: pid=102442: Wed Nov 27 06:25:30 2024 00:23:13.162 read: IOPS=133, BW=33.3MiB/s (34.9MB/s)(338MiB/10165msec) 00:23:13.162 slat (usec): min=13, max=340726, avg=7423.48, stdev=26709.70 00:23:13.162 clat (msec): min=14, max=774, avg=472.74, stdev=122.00 00:23:13.162 lat (msec): min=14, max=847, avg=480.16, stdev=125.58 00:23:13.162 clat percentiles (msec): 00:23:13.162 | 1.00th=[ 19], 5.00th=[ 127], 10.00th=[ 422], 20.00th=[ 464], 00:23:13.162 | 30.00th=[ 477], 40.00th=[ 485], 50.00th=[ 498], 60.00th=[ 514], 00:23:13.162 | 70.00th=[ 523], 80.00th=[ 542], 90.00th=[ 558], 95.00th=[ 584], 00:23:13.162 | 99.00th=[ 617], 99.50th=[ 642], 99.90th=[ 743], 99.95th=[ 776], 00:23:13.162 | 99.99th=[ 776] 00:23:13.162 bw ( KiB/s): min=28160, max=45056, per=2.92%, avg=32995.35, stdev=4049.01, samples=20 00:23:13.162 iops : min= 110, max= 176, avg=128.85, stdev=15.85, samples=20 00:23:13.162 lat (msec) : 20=1.48%, 50=1.18%, 100=1.77%, 250=3.25%, 500=42.65% 00:23:13.162 lat (msec) : 750=49.59%, 1000=0.07% 00:23:13.162 cpu : usr=0.01%, sys=0.41%, ctx=313, majf=0, minf=4097 00:23:13.162 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:23:13.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.162 issued rwts: total=1353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job2: (groupid=0, jobs=1): err= 0: pid=102443: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=90, BW=22.5MiB/s (23.6MB/s)(229MiB/10161msec) 00:23:13.163 slat (usec): min=9, max=541126, avg=11047.86, stdev=57748.79 00:23:13.163 clat (msec): min=18, max=1887, avg=698.84, stdev=346.12 00:23:13.163 lat (msec): min=18, max=1887, avg=709.88, stdev=351.29 00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 36], 5.00th=[ 288], 10.00th=[ 376], 20.00th=[ 451], 00:23:13.163 | 30.00th=[ 502], 40.00th=[ 514], 50.00th=[ 550], 60.00th=[ 625], 00:23:13.163 | 70.00th=[ 793], 80.00th=[ 961], 90.00th=[ 1250], 95.00th=[ 1368], 00:23:13.163 | 99.00th=[ 1552], 99.50th=[ 1888], 99.90th=[ 1888], 99.95th=[ 1888], 00:23:13.163 | 99.99th=[ 1888] 00:23:13.163 bw ( KiB/s): min= 7168, max=48128, per=1.93%, avg=21782.70, stdev=11161.82, samples=20 00:23:13.163 iops : min= 28, max= 188, avg=85.05, stdev=43.58, samples=20 00:23:13.163 lat (msec) : 20=0.44%, 50=0.87%, 250=2.95%, 500=30.60%, 750=31.48% 00:23:13.163 lat (msec) : 1000=13.88%, 2000=19.78% 00:23:13.163 cpu : usr=0.03%, sys=0.35%, ctx=69, majf=0, minf=4097 00:23:13.163 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:23:13.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:23:13.163 issued rwts: total=915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job3: (groupid=0, jobs=1): err= 0: pid=102444: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=137, BW=34.3MiB/s (36.0MB/s)(349MiB/10160msec) 00:23:13.163 slat (usec): min=12, max=340448, avg=7192.59, stdev=33936.12 00:23:13.163 clat (msec): min=17, max=786, avg=458.23, stdev=111.39 00:23:13.163 lat (msec): min=17, max=950, avg=465.42, stdev=116.14 00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 60], 5.00th=[ 201], 10.00th=[ 380], 20.00th=[ 414], 00:23:13.163 | 30.00th=[ 435], 40.00th=[ 443], 50.00th=[ 472], 60.00th=[ 485], 00:23:13.163 | 70.00th=[ 506], 80.00th=[ 542], 90.00th=[ 567], 95.00th=[ 592], 00:23:13.163 | 99.00th=[ 651], 99.50th=[ 651], 99.90th=[ 785], 99.95th=[ 785], 00:23:13.163 | 99.99th=[ 785] 00:23:13.163 bw ( KiB/s): min=21504, max=54784, per=3.01%, avg=34096.00, stdev=6286.04, samples=20 00:23:13.163 iops : min= 84, max= 214, avg=133.15, stdev=24.57, samples=20 00:23:13.163 lat (msec) : 20=0.21%, 50=0.57%, 100=3.37%, 250=1.00%, 500=61.53% 00:23:13.163 lat (msec) : 750=33.09%, 1000=0.21% 00:23:13.163 cpu : usr=0.02%, sys=0.51%, ctx=182, majf=0, minf=4098 00:23:13.163 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:23:13.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.163 issued rwts: total=1396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job4: (groupid=0, jobs=1): err= 0: pid=102445: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=134, BW=33.7MiB/s (35.3MB/s)(343MiB/10163msec) 00:23:13.163 slat (usec): min=11, max=303630, avg=7332.92, stdev=28320.94 00:23:13.163 clat (msec): min=15, max=735, avg=466.65, stdev=108.53 00:23:13.163 lat (msec): min=16, max=762, avg=473.98, stdev=112.85 00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 18], 5.00th=[ 218], 10.00th=[ 422], 20.00th=[ 456], 00:23:13.163 | 30.00th=[ 472], 40.00th=[ 481], 50.00th=[ 489], 60.00th=[ 502], 00:23:13.163 | 70.00th=[ 510], 80.00th=[ 527], 90.00th=[ 542], 95.00th=[ 558], 00:23:13.163 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 735], 99.95th=[ 735], 00:23:13.163 | 99.99th=[ 735] 00:23:13.163 bw ( KiB/s): min=25088, max=48224, per=2.96%, avg=33435.20, stdev=5081.91, samples=20 00:23:13.163 iops : min= 98, max= 188, avg=130.55, stdev=19.81, samples=20 00:23:13.163 lat (msec) : 20=2.19%, 50=0.66%, 100=1.02%, 250=1.53%, 500=56.13% 00:23:13.163 lat (msec) : 750=38.47% 00:23:13.163 cpu : usr=0.03%, sys=0.48%, ctx=276, majf=0, minf=4097 00:23:13.163 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:23:13.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.163 issued rwts: total=1370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job5: (groupid=0, jobs=1): err= 0: pid=102446: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=141, BW=35.4MiB/s (37.1MB/s)(360MiB/10162msec) 00:23:13.163 slat (usec): min=12, max=341456, avg=6984.09, stdev=29248.33 00:23:13.163 clat (msec): min=35, max=676, avg=444.68, stdev=102.34 00:23:13.163 lat (msec): min=35, max=854, avg=451.67, stdev=106.43 
00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 61], 5.00th=[ 190], 10.00th=[ 351], 20.00th=[ 405], 00:23:13.163 | 30.00th=[ 426], 40.00th=[ 439], 50.00th=[ 464], 60.00th=[ 481], 00:23:13.163 | 70.00th=[ 493], 80.00th=[ 514], 90.00th=[ 542], 95.00th=[ 575], 00:23:13.163 | 99.00th=[ 592], 99.50th=[ 642], 99.90th=[ 651], 99.95th=[ 676], 00:23:13.163 | 99.99th=[ 676] 00:23:13.163 bw ( KiB/s): min=25037, max=46080, per=3.11%, avg=35171.85, stdev=5514.58, samples=20 00:23:13.163 iops : min= 97, max= 180, avg=137.35, stdev=21.62, samples=20 00:23:13.163 lat (msec) : 50=0.63%, 100=2.16%, 250=3.55%, 500=67.73%, 750=25.94% 00:23:13.163 cpu : usr=0.01%, sys=0.46%, ctx=266, majf=0, minf=4097 00:23:13.163 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:23:13.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.163 issued rwts: total=1438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job6: (groupid=0, jobs=1): err= 0: pid=102447: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=2206, BW=552MiB/s (578MB/s)(5527MiB/10021msec) 00:23:13.163 slat (usec): min=11, max=55468, avg=450.31, stdev=2045.14 00:23:13.163 clat (msec): min=10, max=186, avg=28.54, stdev= 8.63 00:23:13.163 lat (msec): min=11, max=187, avg=28.99, stdev= 8.77 00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 27], 00:23:13.163 | 30.00th=[ 27], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 29], 00:23:13.163 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 32], 95.00th=[ 33], 00:23:13.163 | 99.00th=[ 81], 99.50th=[ 100], 99.90th=[ 140], 99.95th=[ 140], 00:23:13.163 | 99.99th=[ 161] 00:23:13.163 bw ( KiB/s): min=225218, max=617472, per=49.88%, avg=564165.50, stdev=82576.08, samples=20 00:23:13.163 iops : min= 879, max= 2412, avg=2203.65, stdev=322.70, samples=20 00:23:13.163 lat (msec) : 20=1.44%, 50=97.34%, 100=0.78%, 250=0.43% 00:23:13.163 cpu : usr=0.29%, sys=4.79%, ctx=2500, majf=0, minf=4097 00:23:13.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:13.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.163 issued rwts: total=22109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job7: (groupid=0, jobs=1): err= 0: pid=102448: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=147, BW=37.0MiB/s (38.8MB/s)(376MiB/10152msec) 00:23:13.163 slat (usec): min=9, max=356922, avg=6614.84, stdev=29804.54 00:23:13.163 clat (msec): min=12, max=845, avg=425.44, stdev=155.63 00:23:13.163 lat (msec): min=13, max=845, avg=432.06, stdev=160.23 00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 32], 5.00th=[ 96], 10.00th=[ 109], 20.00th=[ 430], 00:23:13.163 | 30.00th=[ 460], 40.00th=[ 472], 50.00th=[ 481], 60.00th=[ 493], 00:23:13.163 | 70.00th=[ 502], 80.00th=[ 518], 90.00th=[ 535], 95.00th=[ 558], 00:23:13.163 | 99.00th=[ 600], 99.50th=[ 659], 99.90th=[ 844], 99.95th=[ 844], 00:23:13.163 | 99.99th=[ 844] 00:23:13.163 bw ( KiB/s): min=23040, max=131584, per=3.25%, avg=36800.05, stdev=22488.68, samples=20 00:23:13.163 iops : min= 90, max= 514, avg=143.60, stdev=87.88, samples=20 00:23:13.163 lat (msec) : 20=0.80%, 50=2.06%, 100=3.26%, 250=11.25%, 
500=51.40% 00:23:13.163 lat (msec) : 750=31.03%, 1000=0.20% 00:23:13.163 cpu : usr=0.03%, sys=0.36%, ctx=360, majf=0, minf=4097 00:23:13.163 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:23:13.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.163 issued rwts: total=1502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job8: (groupid=0, jobs=1): err= 0: pid=102449: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=1077, BW=269MiB/s (282MB/s)(2702MiB/10032msec) 00:23:13.163 slat (usec): min=11, max=240949, avg=899.76, stdev=3018.11 00:23:13.163 clat (usec): min=924, max=496742, avg=58482.05, stdev=24141.80 00:23:13.163 lat (usec): min=969, max=496785, avg=59381.81, stdev=24476.36 00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 50], 00:23:13.163 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 58], 00:23:13.163 | 70.00th=[ 64], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 75], 00:23:13.163 | 99.00th=[ 84], 99.50th=[ 259], 99.90th=[ 351], 99.95th=[ 355], 00:23:13.163 | 99.99th=[ 472] 00:23:13.163 bw ( KiB/s): min=216576, max=327168, per=24.32%, avg=274998.20, stdev=39576.26, samples=20 00:23:13.163 iops : min= 846, max= 1278, avg=1074.10, stdev=154.60, samples=20 00:23:13.163 lat (usec) : 1000=0.02% 00:23:13.163 lat (msec) : 2=0.02%, 4=0.11%, 10=0.48%, 20=1.21%, 50=20.88% 00:23:13.163 lat (msec) : 100=76.68%, 500=0.59% 00:23:13.163 cpu : usr=0.23%, sys=3.27%, ctx=3604, majf=0, minf=4097 00:23:13.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:13.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.163 issued rwts: total=10807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.163 job9: (groupid=0, jobs=1): err= 0: pid=102450: Wed Nov 27 06:25:30 2024 00:23:13.163 read: IOPS=131, BW=32.8MiB/s (34.4MB/s)(333MiB/10152msec) 00:23:13.163 slat (usec): min=11, max=268534, avg=7531.61, stdev=28637.60 00:23:13.163 clat (msec): min=17, max=672, avg=479.79, stdev=88.12 00:23:13.163 lat (msec): min=17, max=817, avg=487.32, stdev=93.70 00:23:13.163 clat percentiles (msec): 00:23:13.163 | 1.00th=[ 171], 5.00th=[ 230], 10.00th=[ 439], 20.00th=[ 468], 00:23:13.163 | 30.00th=[ 472], 40.00th=[ 485], 50.00th=[ 498], 60.00th=[ 510], 00:23:13.164 | 70.00th=[ 518], 80.00th=[ 527], 90.00th=[ 542], 95.00th=[ 567], 00:23:13.164 | 99.00th=[ 600], 99.50th=[ 609], 99.90th=[ 651], 99.95th=[ 676], 00:23:13.164 | 99.99th=[ 676] 00:23:13.164 bw ( KiB/s): min=28672, max=38834, per=2.87%, avg=32447.30, stdev=2567.66, samples=20 00:23:13.164 iops : min= 112, max= 151, avg=126.60, stdev= 9.98, samples=20 00:23:13.164 lat (msec) : 20=0.38%, 250=5.78%, 500=47.30%, 750=46.55% 00:23:13.164 cpu : usr=0.00%, sys=0.56%, ctx=280, majf=0, minf=4097 00:23:13.164 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:23:13.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.164 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.164 issued rwts: total=1332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.164 latency : target=0, window=0, percentile=100.00%, depth=64 
00:23:13.164 job10: (groupid=0, jobs=1): err= 0: pid=102451: Wed Nov 27 06:25:30 2024 00:23:13.164 read: IOPS=119, BW=30.0MiB/s (31.4MB/s)(305MiB/10161msec) 00:23:13.164 slat (usec): min=12, max=398624, avg=8249.31, stdev=37920.41 00:23:13.164 clat (msec): min=25, max=1060, avg=524.81, stdev=117.19 00:23:13.164 lat (msec): min=26, max=1060, avg=533.06, stdev=121.49 00:23:13.164 clat percentiles (msec): 00:23:13.164 | 1.00th=[ 39], 5.00th=[ 401], 10.00th=[ 422], 20.00th=[ 447], 00:23:13.164 | 30.00th=[ 481], 40.00th=[ 489], 50.00th=[ 506], 60.00th=[ 523], 00:23:13.164 | 70.00th=[ 550], 80.00th=[ 592], 90.00th=[ 651], 95.00th=[ 751], 00:23:13.164 | 99.00th=[ 911], 99.50th=[ 995], 99.90th=[ 1062], 99.95th=[ 1062], 00:23:13.164 | 99.99th=[ 1062] 00:23:13.164 bw ( KiB/s): min=14336, max=42496, per=2.61%, avg=29539.15, stdev=6768.92, samples=20 00:23:13.164 iops : min= 56, max= 166, avg=115.35, stdev=26.43, samples=20 00:23:13.164 lat (msec) : 50=1.15%, 250=0.08%, 500=47.37%, 750=46.88%, 1000=4.19% 00:23:13.164 lat (msec) : 2000=0.33% 00:23:13.164 cpu : usr=0.02%, sys=0.46%, ctx=196, majf=0, minf=4097 00:23:13.164 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:23:13.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.164 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.164 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.164 00:23:13.164 Run status group 0 (all jobs): 00:23:13.164 READ: bw=1104MiB/s (1158MB/s), 22.5MiB/s-552MiB/s (23.6MB/s-578MB/s), io=11.0GiB (11.8GB), run=10021-10165msec 00:23:13.164 00:23:13.164 Disk stats (read/write): 00:23:13.164 nvme0n1: ios=2816/0, merge=0/0, ticks=1227235/0, in_queue=1227235, util=98.47% 00:23:13.164 nvme10n1: ios=2612/0, merge=0/0, ticks=1239254/0, in_queue=1239254, util=98.62% 00:23:13.164 nvme1n1: ios=1702/0, merge=0/0, ticks=1234015/0, in_queue=1234015, util=98.64% 00:23:13.164 nvme2n1: ios=2722/0, merge=0/0, ticks=1236915/0, in_queue=1236915, util=98.76% 00:23:13.164 nvme3n1: ios=2622/0, merge=0/0, ticks=1241448/0, in_queue=1241448, util=98.74% 00:23:13.164 nvme4n1: ios=2753/0, merge=0/0, ticks=1237612/0, in_queue=1237612, util=98.79% 00:23:13.164 nvme5n1: ios=43227/0, merge=0/0, ticks=1204358/0, in_queue=1204358, util=98.74% 00:23:13.164 nvme6n1: ios=2885/0, merge=0/0, ticks=1223350/0, in_queue=1223350, util=98.81% 00:23:13.164 nvme7n1: ios=21086/0, merge=0/0, ticks=1221783/0, in_queue=1221783, util=98.90% 00:23:13.164 nvme8n1: ios=2564/0, merge=0/0, ticks=1240992/0, in_queue=1240992, util=99.01% 00:23:13.164 nvme9n1: ios=2372/0, merge=0/0, ticks=1239236/0, in_queue=1239236, util=99.04% 00:23:13.164 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:13.164 [global] 00:23:13.164 thread=1 00:23:13.164 invalidate=1 00:23:13.164 rw=randwrite 00:23:13.164 time_based=1 00:23:13.164 runtime=10 00:23:13.164 ioengine=libaio 00:23:13.164 direct=1 00:23:13.164 bs=262144 00:23:13.164 iodepth=64 00:23:13.164 norandommap=1 00:23:13.164 numjobs=1 00:23:13.164 00:23:13.164 [job0] 00:23:13.164 filename=/dev/nvme0n1 00:23:13.164 [job1] 00:23:13.164 filename=/dev/nvme10n1 00:23:13.164 [job2] 00:23:13.164 filename=/dev/nvme1n1 00:23:13.164 [job3] 00:23:13.164 filename=/dev/nvme2n1 00:23:13.164 [job4] 00:23:13.164 filename=/dev/nvme3n1 
00:23:13.164 [job5] 00:23:13.164 filename=/dev/nvme4n1 00:23:13.164 [job6] 00:23:13.164 filename=/dev/nvme5n1 00:23:13.164 [job7] 00:23:13.164 filename=/dev/nvme6n1 00:23:13.164 [job8] 00:23:13.164 filename=/dev/nvme7n1 00:23:13.164 [job9] 00:23:13.164 filename=/dev/nvme8n1 00:23:13.164 [job10] 00:23:13.164 filename=/dev/nvme9n1 00:23:13.164 Could not set queue depth (nvme0n1) 00:23:13.164 Could not set queue depth (nvme10n1) 00:23:13.164 Could not set queue depth (nvme1n1) 00:23:13.164 Could not set queue depth (nvme2n1) 00:23:13.164 Could not set queue depth (nvme3n1) 00:23:13.164 Could not set queue depth (nvme4n1) 00:23:13.164 Could not set queue depth (nvme5n1) 00:23:13.164 Could not set queue depth (nvme6n1) 00:23:13.164 Could not set queue depth (nvme7n1) 00:23:13.164 Could not set queue depth (nvme8n1) 00:23:13.164 Could not set queue depth (nvme9n1) 00:23:13.164 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:13.164 fio-3.35 00:23:13.164 Starting 11 threads 00:23:23.141 00:23:23.141 job0: (groupid=0, jobs=1): err= 0: pid=102654: Wed Nov 27 06:25:41 2024 00:23:23.141 write: IOPS=862, BW=216MiB/s (226MB/s)(2167MiB/10049msec); 0 zone resets 00:23:23.141 slat (usec): min=14, max=27763, avg=1151.47, stdev=2146.60 00:23:23.141 clat (msec): min=2, max=234, avg=73.02, stdev=25.74 00:23:23.142 lat (msec): min=2, max=244, avg=74.17, stdev=26.04 00:23:23.142 clat percentiles (msec): 00:23:23.142 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:23:23.142 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 69], 60.00th=[ 69], 00:23:23.142 | 70.00th=[ 70], 80.00th=[ 70], 90.00th=[ 70], 95.00th=[ 72], 00:23:23.142 | 99.00th=[ 213], 99.50th=[ 224], 99.90th=[ 234], 99.95th=[ 234], 00:23:23.142 | 99.99th=[ 234] 00:23:23.142 bw ( KiB/s): min=75776, max=241664, per=13.70%, avg=220289.85, stdev=48490.78, samples=20 00:23:23.142 iops : min= 296, max= 944, avg=860.50, stdev=189.41, samples=20 00:23:23.142 lat (msec) : 4=0.02%, 20=0.05%, 50=0.22%, 100=95.27%, 250=4.44% 00:23:23.142 cpu : usr=1.31%, sys=1.66%, ctx=9402, majf=0, minf=1 00:23:23.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:23.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.142 issued rwts: total=0,8669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.142 job1: (groupid=0, jobs=1): err= 0: pid=102655: Wed Nov 27 06:25:41 2024 00:23:23.142 write: IOPS=209, BW=52.3MiB/s (54.8MB/s)(529MiB/10122msec); 0 zone resets 00:23:23.142 slat (usec): min=15, max=68750, avg=4730.46, stdev=8704.10 00:23:23.142 clat (msec): min=19, max=426, avg=301.34, stdev=50.35 00:23:23.142 lat (msec): min=19, max=426, avg=306.07, stdev=50.45 00:23:23.142 clat percentiles (msec): 00:23:23.142 | 1.00th=[ 83], 5.00th=[ 207], 10.00th=[ 236], 20.00th=[ 292], 00:23:23.142 | 30.00th=[ 300], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 317], 00:23:23.142 | 70.00th=[ 321], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 351], 00:23:23.142 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 426], 00:23:23.142 | 99.99th=[ 426] 00:23:23.142 bw ( KiB/s): min=45056, max=71823, per=3.27%, avg=52533.40, stdev=6476.73, samples=20 00:23:23.142 iops : min= 176, max= 280, avg=205.15, stdev=25.23, samples=20 00:23:23.142 lat (msec) : 20=0.19%, 50=0.38%, 100=0.57%, 250=13.99%, 500=84.88% 00:23:23.142 cpu : usr=0.43%, sys=0.43%, ctx=1935, majf=0, minf=1 00:23:23.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:23:23.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.142 issued rwts: total=0,2116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.142 job2: (groupid=0, jobs=1): err= 0: pid=102662: Wed Nov 27 06:25:41 2024 00:23:23.142 write: IOPS=448, BW=112MiB/s (118MB/s)(1130MiB/10074msec); 0 zone resets 00:23:23.142 slat (usec): min=16, max=90112, avg=2172.47, stdev=4073.25 00:23:23.142 clat (msec): min=7, max=225, avg=140.43, stdev=24.67 00:23:23.142 lat (msec): min=8, max=227, avg=142.60, stdev=24.79 00:23:23.142 clat percentiles (msec): 00:23:23.142 | 1.00th=[ 35], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 133], 00:23:23.142 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 140], 00:23:23.142 | 70.00th=[ 142], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 199], 00:23:23.142 | 99.00th=[ 220], 99.50th=[ 222], 99.90th=[ 224], 99.95th=[ 226], 00:23:23.142 | 99.99th=[ 226] 00:23:23.142 bw ( KiB/s): min=73875, max=148992, per=7.09%, avg=114095.10, stdev=14521.76, samples=20 00:23:23.142 iops : min= 288, max= 582, avg=445.65, stdev=56.81, samples=20 00:23:23.142 lat (msec) : 10=0.07%, 20=0.42%, 50=0.93%, 100=1.66%, 250=96.92% 00:23:23.142 cpu : usr=0.74%, sys=0.74%, ctx=5893, majf=0, minf=1 00:23:23.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:23.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.142 issued rwts: total=0,4520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.142 job3: (groupid=0, jobs=1): err= 0: pid=102668: Wed Nov 27 06:25:41 2024 00:23:23.142 write: IOPS=211, BW=53.0MiB/s (55.5MB/s)(536MiB/10120msec); 0 zone resets 00:23:23.142 slat (usec): min=19, max=93327, avg=4667.67, stdev=8531.90 00:23:23.142 clat (msec): min=72, max=430, avg=297.33, stdev=43.59 00:23:23.142 
lat (msec): min=72, max=431, avg=302.00, stdev=43.56 00:23:23.142 clat percentiles (msec): 00:23:23.142 | 1.00th=[ 134], 5.00th=[ 205], 10.00th=[ 224], 20.00th=[ 292], 00:23:23.142 | 30.00th=[ 300], 40.00th=[ 309], 50.00th=[ 313], 60.00th=[ 317], 00:23:23.142 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 334], 00:23:23.142 | 99.00th=[ 338], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 430], 00:23:23.142 | 99.99th=[ 430] 00:23:23.142 bw ( KiB/s): min=49053, max=71680, per=3.31%, avg=53268.65, stdev=5847.26, samples=20 00:23:23.142 iops : min= 191, max= 280, avg=208.05, stdev=22.86, samples=20 00:23:23.142 lat (msec) : 100=0.23%, 250=14.37%, 500=85.40% 00:23:23.142 cpu : usr=0.41%, sys=0.49%, ctx=1877, majf=0, minf=1 00:23:23.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:23:23.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.142 issued rwts: total=0,2144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.142 job4: (groupid=0, jobs=1): err= 0: pid=102669: Wed Nov 27 06:25:41 2024 00:23:23.142 write: IOPS=2043, BW=511MiB/s (536MB/s)(5121MiB/10026msec); 0 zone resets 00:23:23.142 slat (usec): min=12, max=7159, avg=486.70, stdev=812.67 00:23:23.142 clat (usec): min=1893, max=61277, avg=30831.60, stdev=3585.57 00:23:23.142 lat (usec): min=1919, max=61307, avg=31318.30, stdev=3558.32 00:23:23.142 clat percentiles (usec): 00:23:23.142 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25297], 20.00th=[29230], 00:23:23.142 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:23:23.142 | 70.00th=[31589], 80.00th=[32113], 90.00th=[33424], 95.00th=[34866], 00:23:23.142 | 99.00th=[41681], 99.50th=[52691], 99.90th=[57934], 99.95th=[58459], 00:23:23.142 | 99.99th=[61080] 00:23:23.142 bw ( KiB/s): min=420681, max=654848, per=32.51%, avg=522767.65, stdev=43983.36, samples=20 00:23:23.142 iops : min= 1643, max= 2558, avg=2042.00, stdev=171.85, samples=20 00:23:23.142 lat (msec) : 2=0.01%, 10=0.02%, 20=0.05%, 50=99.28%, 100=0.64% 00:23:23.142 cpu : usr=2.48%, sys=2.55%, ctx=24705, majf=0, minf=1 00:23:23.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:23.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.142 issued rwts: total=0,20484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.142 job5: (groupid=0, jobs=1): err= 0: pid=102670: Wed Nov 27 06:25:41 2024 00:23:23.142 write: IOPS=526, BW=132MiB/s (138MB/s)(1327MiB/10072msec); 0 zone resets 00:23:23.142 slat (usec): min=13, max=10098, avg=1851.50, stdev=3384.68 00:23:23.142 clat (msec): min=5, max=206, avg=119.59, stdev=34.78 00:23:23.142 lat (msec): min=5, max=206, avg=121.44, stdev=35.19 00:23:23.142 clat percentiles (msec): 00:23:23.142 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 78], 00:23:23.142 | 30.00th=[ 131], 40.00th=[ 134], 50.00th=[ 138], 60.00th=[ 140], 00:23:23.142 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 142], 95.00th=[ 144], 00:23:23.142 | 99.00th=[ 153], 99.50th=[ 171], 99.90th=[ 199], 99.95th=[ 199], 00:23:23.142 | 99.99th=[ 207] 00:23:23.142 bw ( KiB/s): min=113152, max=301568, per=8.35%, avg=134261.25, stdev=51125.83, samples=20 00:23:23.142 iops : min= 442, max= 1178, 
avg=524.45, stdev=199.71, samples=20 00:23:23.142 lat (msec) : 10=0.08%, 20=0.08%, 50=2.45%, 100=19.77%, 250=77.63% 00:23:23.142 cpu : usr=0.75%, sys=0.85%, ctx=7554, majf=0, minf=1 00:23:23.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:23.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.142 issued rwts: total=0,5307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.142 job6: (groupid=0, jobs=1): err= 0: pid=102672: Wed Nov 27 06:25:41 2024 00:23:23.142 write: IOPS=222, BW=55.7MiB/s (58.4MB/s)(564MiB/10119msec); 0 zone resets 00:23:23.142 slat (usec): min=17, max=31051, avg=4350.24, stdev=7899.40 00:23:23.142 clat (msec): min=20, max=437, avg=282.62, stdev=50.83 00:23:23.142 lat (msec): min=20, max=437, avg=286.97, stdev=51.21 00:23:23.142 clat percentiles (msec): 00:23:23.142 | 1.00th=[ 97], 5.00th=[ 167], 10.00th=[ 209], 20.00th=[ 271], 00:23:23.142 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 309], 00:23:23.142 | 70.00th=[ 313], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 317], 00:23:23.142 | 99.00th=[ 326], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 439], 00:23:23.142 | 99.99th=[ 439] 00:23:23.142 bw ( KiB/s): min=49664, max=81920, per=3.49%, avg=56135.65, stdev=8326.04, samples=20 00:23:23.142 iops : min= 194, max= 320, avg=219.25, stdev=32.54, samples=20 00:23:23.142 lat (msec) : 50=0.35%, 100=0.71%, 250=18.13%, 500=80.81% 00:23:23.142 cpu : usr=0.33%, sys=0.38%, ctx=3044, majf=0, minf=1 00:23:23.142 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:23:23.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.142 issued rwts: total=0,2256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.142 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.142 job7: (groupid=0, jobs=1): err= 0: pid=102673: Wed Nov 27 06:25:41 2024 00:23:23.142 write: IOPS=202, BW=50.7MiB/s (53.2MB/s)(513MiB/10120msec); 0 zone resets 00:23:23.142 slat (usec): min=18, max=159441, avg=4881.46, stdev=9458.87 00:23:23.142 clat (msec): min=98, max=422, avg=310.67, stdev=40.31 00:23:23.142 lat (msec): min=121, max=422, avg=315.56, stdev=39.94 00:23:23.142 clat percentiles (msec): 00:23:23.142 | 1.00th=[ 171], 5.00th=[ 222], 10.00th=[ 239], 20.00th=[ 296], 00:23:23.142 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 330], 00:23:23.142 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 342], 95.00th=[ 347], 00:23:23.142 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 401], 99.95th=[ 422], 00:23:23.142 | 99.99th=[ 422] 00:23:23.142 bw ( KiB/s): min=47104, max=69632, per=3.16%, avg=50892.95, stdev=4657.95, samples=20 00:23:23.143 iops : min= 184, max= 272, avg=198.75, stdev=18.18, samples=20 00:23:23.143 lat (msec) : 100=0.05%, 250=11.65%, 500=88.30% 00:23:23.143 cpu : usr=0.29%, sys=0.41%, ctx=2054, majf=0, minf=1 00:23:23.143 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:23:23.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.143 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.143 issued rwts: total=0,2052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.143 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.143 job8: (groupid=0, 
jobs=1): err= 0: pid=102674: Wed Nov 27 06:25:41 2024 00:23:23.143 write: IOPS=859, BW=215MiB/s (225MB/s)(2160MiB/10051msec); 0 zone resets 00:23:23.143 slat (usec): min=17, max=51009, avg=1155.56, stdev=2224.04 00:23:23.143 clat (msec): min=9, max=239, avg=73.22, stdev=27.26 00:23:23.143 lat (msec): min=9, max=239, avg=74.38, stdev=27.59 00:23:23.143 clat percentiles (msec): 00:23:23.143 | 1.00th=[ 63], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:23:23.143 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 69], 60.00th=[ 69], 00:23:23.143 | 70.00th=[ 70], 80.00th=[ 70], 90.00th=[ 70], 95.00th=[ 71], 00:23:23.143 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 239], 99.95th=[ 239], 00:23:23.143 | 99.99th=[ 241] 00:23:23.143 bw ( KiB/s): min=75776, max=242176, per=13.65%, avg=219523.70, stdev=49965.14, samples=20 00:23:23.143 iops : min= 296, max= 946, avg=857.50, stdev=195.17, samples=20 00:23:23.143 lat (msec) : 10=0.02%, 20=0.02%, 50=0.27%, 100=95.46%, 250=4.22% 00:23:23.143 cpu : usr=1.19%, sys=1.52%, ctx=10841, majf=0, minf=1 00:23:23.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:23.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.143 issued rwts: total=0,8640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.143 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.143 job9: (groupid=0, jobs=1): err= 0: pid=102675: Wed Nov 27 06:25:41 2024 00:23:23.143 write: IOPS=523, BW=131MiB/s (137MB/s)(1320MiB/10080msec); 0 zone resets 00:23:23.143 slat (usec): min=15, max=15589, avg=1892.44, stdev=3431.56 00:23:23.143 clat (msec): min=6, max=219, avg=120.20, stdev=35.75 00:23:23.143 lat (msec): min=6, max=219, avg=122.09, stdev=36.15 00:23:23.143 clat percentiles (msec): 00:23:23.143 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 77], 00:23:23.143 | 30.00th=[ 131], 40.00th=[ 134], 50.00th=[ 138], 60.00th=[ 140], 00:23:23.143 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 144], 00:23:23.143 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 211], 99.95th=[ 211], 00:23:23.143 | 99.99th=[ 220] 00:23:23.143 bw ( KiB/s): min=106496, max=304128, per=8.31%, avg=133557.40, stdev=52196.82, samples=20 00:23:23.143 iops : min= 416, max= 1188, avg=521.70, stdev=203.90, samples=20 00:23:23.143 lat (msec) : 10=0.27%, 20=0.21%, 50=2.61%, 100=19.01%, 250=77.90% 00:23:23.143 cpu : usr=0.71%, sys=0.71%, ctx=7516, majf=0, minf=1 00:23:23.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:23.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.143 issued rwts: total=0,5281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.143 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.143 job10: (groupid=0, jobs=1): err= 0: pid=102676: Wed Nov 27 06:25:41 2024 00:23:23.143 write: IOPS=208, BW=52.2MiB/s (54.8MB/s)(529MiB/10120msec); 0 zone resets 00:23:23.143 slat (usec): min=18, max=95880, avg=4728.17, stdev=8704.77 00:23:23.143 clat (msec): min=2, max=443, avg=301.53, stdev=45.35 00:23:23.143 lat (msec): min=2, max=443, avg=306.26, stdev=45.27 00:23:23.143 clat percentiles (msec): 00:23:23.143 | 1.00th=[ 69], 5.00th=[ 224], 10.00th=[ 243], 20.00th=[ 292], 00:23:23.143 | 30.00th=[ 300], 40.00th=[ 309], 50.00th=[ 313], 60.00th=[ 317], 00:23:23.143 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 334], 
95.00th=[ 338], 00:23:23.143 | 99.00th=[ 347], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 443], 00:23:23.143 | 99.99th=[ 443] 00:23:23.143 bw ( KiB/s): min=47104, max=71680, per=3.26%, avg=52500.25, stdev=5139.35, samples=20 00:23:23.143 iops : min= 184, max= 280, avg=205.05, stdev=20.07, samples=20 00:23:23.143 lat (msec) : 4=0.09%, 20=0.19%, 50=0.38%, 100=0.76%, 250=10.08% 00:23:23.143 lat (msec) : 500=88.51% 00:23:23.143 cpu : usr=0.38%, sys=0.48%, ctx=2054, majf=0, minf=1 00:23:23.143 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:23:23.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:23.143 issued rwts: total=0,2114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.143 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:23.143 00:23:23.143 Run status group 0 (all jobs): 00:23:23.143 WRITE: bw=1570MiB/s (1647MB/s), 50.7MiB/s-511MiB/s (53.2MB/s-536MB/s), io=15.5GiB (16.7GB), run=10026-10122msec 00:23:23.143 00:23:23.143 Disk stats (read/write): 00:23:23.143 nvme0n1: ios=50/16849, merge=0/0, ticks=22/1195225, in_queue=1195247, util=98.43% 00:23:23.143 nvme10n1: ios=49/4134, merge=0/0, ticks=25/1212998, in_queue=1213023, util=98.51% 00:23:23.143 nvme1n1: ios=49/8986, merge=0/0, ticks=30/1225327, in_queue=1225357, util=98.59% 00:23:23.143 nvme2n1: ios=36/4190, merge=0/0, ticks=14/1213213, in_queue=1213227, util=98.52% 00:23:23.143 nvme3n1: ios=35/40022, merge=0/0, ticks=31/1197676, in_queue=1197707, util=98.51% 00:23:23.143 nvme4n1: ios=0/10554, merge=0/0, ticks=0/1224353, in_queue=1224353, util=98.63% 00:23:23.143 nvme5n1: ios=0/4421, merge=0/0, ticks=0/1214890, in_queue=1214890, util=98.70% 00:23:23.143 nvme6n1: ios=0/4001, merge=0/0, ticks=0/1211865, in_queue=1211865, util=98.72% 00:23:23.143 nvme7n1: ios=0/16800, merge=0/0, ticks=0/1194697, in_queue=1194697, util=98.87% 00:23:23.143 nvme8n1: ios=0/10516, merge=0/0, ticks=0/1224990, in_queue=1224990, util=99.00% 00:23:23.143 nvme9n1: ios=0/4136, merge=0/0, ticks=0/1213522, in_queue=1213522, util=98.96% 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:23.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:23:23.143 06:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:23.143 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:23.143 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:23:23.143 06:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.143 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:23:23.144 06:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:23:23.144 06:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:23:23.144 06:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:23.144 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:23:23.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:23.145 
06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.145 rmmod nvme_tcp 00:23:23.145 rmmod nvme_fabrics 00:23:23.145 rmmod nvme_keyring 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 101975 ']' 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 101975 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 101975 ']' 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 101975 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101975 00:23:23.145 killing process with pid 101975 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101975' 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 
101975 00:23:23.145 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 101975 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.404 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.662 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:23:23.662 00:23:23.662 
real 0m48.137s 00:23:23.662 user 2m56.781s 00:23:23.662 sys 0m12.419s 00:23:23.662 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.662 ************************************ 00:23:23.662 END TEST nvmf_multiconnection 00:23:23.662 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:23.662 ************************************ 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.662 ************************************ 00:23:23.662 START TEST nvmf_initiator_timeout 00:23:23.662 ************************************ 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:23.662 * Looking for test storage... 00:23:23.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.662 
06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.662 --rc genhtml_branch_coverage=1 00:23:23.662 --rc genhtml_function_coverage=1 00:23:23.662 --rc genhtml_legend=1 00:23:23.662 --rc geninfo_all_blocks=1 00:23:23.662 --rc geninfo_unexecuted_blocks=1 00:23:23.662 00:23:23.662 ' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.662 --rc genhtml_branch_coverage=1 00:23:23.662 --rc genhtml_function_coverage=1 00:23:23.662 --rc genhtml_legend=1 00:23:23.662 --rc geninfo_all_blocks=1 00:23:23.662 --rc geninfo_unexecuted_blocks=1 00:23:23.662 00:23:23.662 ' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.662 --rc genhtml_branch_coverage=1 00:23:23.662 --rc genhtml_function_coverage=1 00:23:23.662 --rc genhtml_legend=1 00:23:23.662 --rc geninfo_all_blocks=1 00:23:23.662 --rc geninfo_unexecuted_blocks=1 00:23:23.662 00:23:23.662 ' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.662 --rc genhtml_branch_coverage=1 00:23:23.662 --rc genhtml_function_coverage=1 00:23:23.662 --rc genhtml_legend=1 00:23:23.662 --rc geninfo_all_blocks=1 00:23:23.662 --rc geninfo_unexecuted_blocks=1 00:23:23.662 00:23:23.662 ' 
00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.662 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.922 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.922 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:23.922 Cannot find device "nvmf_init_br" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:23.922 Cannot find device "nvmf_init_br2" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:23.922 Cannot find device "nvmf_tgt_br" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:23.922 Cannot find device "nvmf_tgt_br2" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:23.922 Cannot find device "nvmf_init_br" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:23.922 Cannot find device "nvmf_init_br2" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:23.922 Cannot find device "nvmf_tgt_br" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:23.922 Cannot find device "nvmf_tgt_br2" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:23:23.922 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:23.922 Cannot find device "nvmf_br" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:23.922 Cannot find device "nvmf_init_if" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:23.922 Cannot find device "nvmf_init_if2" 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:23:23.922 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:23.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:23.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:23.923 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:24.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:24.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:23:24.182 00:23:24.182 --- 10.0.0.3 ping statistics --- 00:23:24.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.182 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:24.182 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:24.182 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:23:24.182 00:23:24.182 --- 10.0.0.4 ping statistics --- 00:23:24.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.182 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:24.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:23:24.182 00:23:24.182 --- 10.0.0.1 ping statistics --- 00:23:24.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.182 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:24.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:23:24.182 00:23:24.182 --- 10.0.0.2 ping statistics --- 00:23:24.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.182 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=103099 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 103099 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 103099 ']' 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.182 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.182 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.182 [2024-11-27 06:25:43.546508] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:23:24.182 [2024-11-27 06:25:43.546559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.182 [2024-11-27 06:25:43.667159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:24.182 [2024-11-27 06:25:43.689477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.182 [2024-11-27 06:25:43.708135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.182 [2024-11-27 06:25:43.708172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.182 [2024-11-27 06:25:43.708179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.182 [2024-11-27 06:25:43.708184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.182 [2024-11-27 06:25:43.708188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
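The startup notices above come from launching the target inside the test namespace and waiting for its RPC socket. A minimal sketch of that step follows; the launch command is taken verbatim from the trace, while the polling loop is only an assumed stand-in for the harness's waitforlisten helper:

    # launch command copied from the trace; the wait loop is an assumed stand-in
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # RPC socket is up

Note the unix socket lives in the shared filesystem, so the host-side check works even though the app runs in a separate network namespace.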
00:23:24.182 [2024-11-27 06:25:43.709000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.182 [2024-11-27 06:25:43.709370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.182 [2024-11-27 06:25:43.709703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.182 [2024-11-27 06:25:43.709866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.440 Malloc0 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.440 Delay0 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.440 [2024-11-27 06:25:43.878308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.440 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.440 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.441 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:24.441 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.441 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.441 [2024-11-27 06:25:43.906472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:24.441 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.441 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:23:24.699 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:24.699 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:23:24.699 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:24.699 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:24.699 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=103168 00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 
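Condensed, the target-side wiring traced above is the following RPC sequence; every command appears in the trace, with rpc.py standing in for the test's rpc_cmd wrapper:

    rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in usec
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 \
        --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30

Layering Delay0 over Malloc0 is what makes the timeout test possible: the delay bdev's latencies can be retuned at runtime while fio keeps I/O in flight.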
00:23:26.599 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:26.599 [global] 00:23:26.599 thread=1 00:23:26.599 invalidate=1 00:23:26.599 rw=write 00:23:26.599 time_based=1 00:23:26.599 runtime=60 00:23:26.599 ioengine=libaio 00:23:26.599 direct=1 00:23:26.599 bs=4096 00:23:26.599 iodepth=1 00:23:26.599 norandommap=0 00:23:26.599 numjobs=1 00:23:26.599 00:23:26.599 verify_dump=1 00:23:26.599 verify_backlog=512 00:23:26.599 verify_state_save=0 00:23:26.599 do_verify=1 00:23:26.599 verify=crc32c-intel 00:23:26.599 [job0] 00:23:26.599 filename=/dev/nvme0n1 00:23:26.911 Could not set queue depth (nvme0n1) 00:23:26.911 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:26.911 fio-3.35 00:23:26.912 Starting 1 thread 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.194 true 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.194 true 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.194 true 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.194 true 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.194 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:23:32.728 true 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:32.728 true 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:32.728 true 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.728 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:32.728 true 00:23:32.729 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.729 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:32.729 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 103168 00:24:28.981 00:24:28.981 job0: (groupid=0, jobs=1): err= 0: pid=103189: Wed Nov 27 06:26:46 2024 00:24:28.981 read: IOPS=1312, BW=5250KiB/s (5376kB/s)(308MiB/60000msec) 00:24:28.981 slat (usec): min=4, max=9143, avg= 5.72, stdev=42.95 00:24:28.981 clat (usec): min=99, max=40621k, avg=641.56, stdev=144749.58 00:24:28.981 lat (usec): min=108, max=40621k, avg=647.28, stdev=144749.61 00:24:28.981 clat percentiles (usec): 00:24:28.981 | 1.00th=[ 111], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 118], 00:24:28.981 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 125], 00:24:28.981 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 149], 00:24:28.981 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 190], 99.95th=[ 281], 00:24:28.981 | 99.99th=[ 433] 00:24:28.981 write: IOPS=1314, BW=5257KiB/s (5383kB/s)(308MiB/60000msec); 0 zone resets 00:24:28.981 slat (usec): min=6, max=788, avg= 8.54, stdev= 4.17 00:24:28.981 clat (usec): min=42, max=734, avg=104.49, stdev=11.67 00:24:28.981 lat (usec): min=93, max=894, avg=113.03, stdev=13.11 00:24:28.981 clat percentiles (usec): 00:24:28.981 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 98], 00:24:28.981 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:24:28.981 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 123], 00:24:28.981 | 99.00th=[ 137], 99.50th=[ 141], 99.90th=[ 225], 99.95th=[ 310], 00:24:28.981 | 99.99th=[ 453] 00:24:28.981 bw ( KiB/s): min= 904, max=17528, 
per=100.00%, avg=15811.69, stdev=2904.82, samples=39 00:24:28.981 iops : min= 226, max= 4382, avg=3952.92, stdev=726.21, samples=39 00:24:28.981 lat (usec) : 50=0.01%, 100=16.85%, 250=83.08%, 500=0.07%, 750=0.01% 00:24:28.981 lat (msec) : 2=0.01%, 50=0.01%, >=2000=0.01% 00:24:28.981 cpu : usr=0.40%, sys=1.40%, ctx=157622, majf=0, minf=5 00:24:28.981 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.981 issued rwts: total=78753,78848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.981 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:28.981 00:24:28.981 Run status group 0 (all jobs): 00:24:28.981 READ: bw=5250KiB/s (5376kB/s), 5250KiB/s-5250KiB/s (5376kB/s-5376kB/s), io=308MiB (323MB), run=60000-60000msec 00:24:28.981 WRITE: bw=5257KiB/s (5383kB/s), 5257KiB/s-5257KiB/s (5383kB/s-5383kB/s), io=308MiB (323MB), run=60000-60000msec 00:24:28.981 00:24:28.981 Disk stats (read/write): 00:24:28.981 nvme0n1: ios=78634/78434, merge=0/0, ticks=10020/8393, in_queue=18413, util=99.92% 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:28.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:28.981 nvmf hotplug test: fio successful as expected 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 
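The err=0 result above is the whole point of the test: while fio ran, every Delay0 latency knob was pushed from microseconds up to roughly 31 seconds and later dropped back, and the initiator was expected to ride out the stall without I/O errors. A condensed sketch of that bump, under the assumption that all four knobs get the same value (the trace actually sets p99_write to 310000000 rather than 31000000):

    for knob in avg_read avg_write p99_read p99_write; do
        rpc.py bdev_delay_update_latency Delay0 "$knob" 31000000   # ~31 s, in usec
    done
    sleep 3                                                # let in-flight I/O hit the stall
    for knob in avg_read avg_write p99_read p99_write; do
        rpc.py bdev_delay_update_latency Delay0 "$knob" 30         # back to 30 usec
    done
    wait "$fio_pid"                                        # fio must exit 0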
00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.981 rmmod nvme_tcp 00:24:28.981 rmmod nvme_fabrics 00:24:28.981 rmmod nvme_keyring 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 103099 ']' 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 103099 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 103099 ']' 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 103099 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.981 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103099 00:24:28.982 killing process with pid 103099 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103099' 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 103099 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 103099 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.982 06:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:24:28.982 00:24:28.982 real 1m3.815s 00:24:28.982 user 4m4.927s 00:24:28.982 sys 0m6.514s 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:28.982 ************************************ 00:24:28.982 END TEST nvmf_initiator_timeout 00:24:28.982 ************************************ 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:28.982 06:26:46 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:28.982 ************************************ 00:24:28.982 START TEST nvmf_nsid 00:24:28.982 ************************************ 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:28.982 * Looking for test storage... 00:24:28.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:28.982 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.982 --rc genhtml_branch_coverage=1 00:24:28.982 --rc genhtml_function_coverage=1 00:24:28.982 --rc genhtml_legend=1 00:24:28.982 --rc geninfo_all_blocks=1 00:24:28.982 --rc geninfo_unexecuted_blocks=1 00:24:28.982 00:24:28.982 ' 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.982 --rc genhtml_branch_coverage=1 00:24:28.982 --rc genhtml_function_coverage=1 00:24:28.982 --rc genhtml_legend=1 00:24:28.982 --rc geninfo_all_blocks=1 00:24:28.982 --rc geninfo_unexecuted_blocks=1 00:24:28.982 00:24:28.982 ' 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.982 --rc genhtml_branch_coverage=1 00:24:28.982 --rc genhtml_function_coverage=1 00:24:28.982 --rc genhtml_legend=1 00:24:28.982 --rc geninfo_all_blocks=1 00:24:28.982 --rc geninfo_unexecuted_blocks=1 00:24:28.982 00:24:28.982 ' 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.982 --rc genhtml_branch_coverage=1 00:24:28.982 --rc genhtml_function_coverage=1 00:24:28.982 --rc genhtml_legend=1 00:24:28.982 --rc geninfo_all_blocks=1 00:24:28.982 --rc geninfo_unexecuted_blocks=1 00:24:28.982 00:24:28.982 ' 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.982 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.983 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:28.983 Cannot find device "nvmf_init_br" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:28.983 Cannot find device "nvmf_init_br2" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:28.983 Cannot find device "nvmf_tgt_br" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.983 Cannot find device "nvmf_tgt_br2" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:28.983 Cannot find device "nvmf_init_br" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:28.983 Cannot find device "nvmf_init_br2" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:28.983 Cannot find device "nvmf_tgt_br" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:28.983 Cannot find device "nvmf_tgt_br2" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:28.983 Cannot find device "nvmf_br" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:28.983 Cannot find device "nvmf_init_if" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:28.983 Cannot find device "nvmf_init_if2" 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:24:28.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.983 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
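The interface dance above is the standard veth fixture, condensed here to a single initiator/target pair; the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is wired identically, and every command below appears in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up  # bring both halves up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ipts wrapper tags each rule with an SPDK_NVMF comment, which is why teardown can later remove exactly these rules with iptables-save piped through grep -v SPDK_NVMF into iptables-restore, as the iptr step earlier in this log shows.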
00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:28.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:24:28.984 00:24:28.984 --- 10.0.0.3 ping statistics --- 00:24:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.984 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:28.984 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:28.984 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:24:28.984 00:24:28.984 --- 10.0.0.4 ping statistics --- 00:24:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.984 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:24:28.984 00:24:28.984 --- 10.0.0.1 ping statistics --- 00:24:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.984 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:28.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:24:28.984 00:24:28.984 --- 10.0.0.2 ping statistics --- 00:24:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.984 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=104064 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 104064 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 104064 ']' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:28.984 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:28.984 [2024-11-27 06:26:47.357855] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:28.984 [2024-11-27 06:26:47.357899] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.984 [2024-11-27 06:26:47.470034] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:24:28.984 [2024-11-27 06:26:47.497815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.984 [2024-11-27 06:26:47.515294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.984 [2024-11-27 06:26:47.515333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.984 [2024-11-27 06:26:47.515339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.984 [2024-11-27 06:26:47.515344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.984 [2024-11-27 06:26:47.515348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.984 [2024-11-27 06:26:47.515594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=104108 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:28.984 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@56 -- # ns1uuid=10a41ac0-b5be-4412-ad8a-dec8aaf37804 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=6e08209c-352b-4898-b9c7-9c9661e92c6b 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=91e6e26d-579f-4d05-8c37-beb33d452a12 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:28.985 null0 00:24:28.985 null1 00:24:28.985 null2 00:24:28.985 [2024-11-27 06:26:48.225746] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:28.985 [2024-11-27 06:26:48.225805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104108 ] 00:24:28.985 [2024-11-27 06:26:48.226886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.985 [2024-11-27 06:26:48.250968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 104108 /var/tmp/tgt2.sock 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 104108 ']' 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.985 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:28.985 [2024-11-27 06:26:48.340409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
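The rpc_cmd batch at target/nsid.sh@63 that emitted the null0/null1/null2 lines above runs with xtrace suppressed, so the actual RPCs are hidden; the trace only shows the result (three null bdevs, one target listening on 10.0.0.3:4420, a second on 10.0.0.1:4421, and a cnode2 subsystem that the host connects to below). A minimal single-target sketch of how namespaces acquire the freshly generated UUIDs, under assumptions: the NQN is taken from the later connect call, the null-bdev geometry is invented, and the real script's two-target chaining is collapsed for brevity.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # default socket /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
    for i in 0 1 2; do
        $rpc bdev_null_create null$i 100 4096           # 100 MB / 4096-byte blocks: assumed sizes
    done
    # tag each namespace with one of the uuidgen values from the trace above
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid 10a41ac0-b5be-4412-ad8a-dec8aaf37804
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 --uuid 6e08209c-352b-4898-b9c7-9c9661e92c6b
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 --uuid 91e6e26d-579f-4d05-8c37-beb33d452a12
    # the NGUID that 'nvme id-ns /dev/nvme0nX -o json | jq -r .nguid' reports further
    # down is each UUID with its dashes stripped, which is exactly what the test asserts

The nsid.sh@96/@98/@100 comparisons below are that assertion in action: uuid2nguid (a `tr -d -` over the UUID, per nvmf/common.sh@787) must match the NGUID extracted from the identify-namespace JSON, modulo case.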
00:24:28.985 [2024-11-27 06:26:48.370450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.985 [2024-11-27 06:26:48.389403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.254 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.254 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:29.254 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:29.544 [2024-11-27 06:26:48.883499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.544 [2024-11-27 06:26:48.899555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:29.544 nvme0n1 nvme0n2 00:24:29.544 nvme1n1 00:24:29.544 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:29.544 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:29.544 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:29.544 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:29.544 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:29.544 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:29.544 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:29.544 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:29.801 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 10a41ac0-b5be-4412-ad8a-dec8aaf37804 00:24:30.734 06:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=10a41ac0b5be4412ad8adec8aaf37804 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 10A41AC0B5BE4412AD8ADEC8AAF37804 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 10A41AC0B5BE4412AD8ADEC8AAF37804 == \1\0\A\4\1\A\C\0\B\5\B\E\4\4\1\2\A\D\8\A\D\E\C\8\A\A\F\3\7\8\0\4 ]] 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 6e08209c-352b-4898-b9c7-9c9661e92c6b 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:30.734 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6e08209c352b4898b9c79c9661e92c6b 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6E08209C352B4898B9C79C9661E92C6B 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 6E08209C352B4898B9C79C9661E92C6B == \6\E\0\8\2\0\9\C\3\5\2\B\4\8\9\8\B\9\C\7\9\C\9\6\6\1\E\9\2\C\6\B ]] 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:30.735 06:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 91e6e26d-579f-4d05-8c37-beb33d452a12 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=91e6e26d579f4d058c37beb33d452a12 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 91E6E26D579F4D058C37BEB33D452A12 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 91E6E26D579F4D058C37BEB33D452A12 == \9\1\E\6\E\2\6\D\5\7\9\F\4\D\0\5\8\C\3\7\B\E\B\3\3\D\4\5\2\A\1\2 ]] 00:24:30.735 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 104108 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 104108 ']' 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 104108 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104108 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:30.994 killing process with pid 104108 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104108' 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 104108 00:24:30.994 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 104108 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.253 
rmmod nvme_tcp 00:24:31.253 rmmod nvme_fabrics 00:24:31.253 rmmod nvme_keyring 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 104064 ']' 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 104064 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 104064 ']' 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 104064 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104064 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.253 killing process with pid 104064 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104064' 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 104064 00:24:31.253 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 104064 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:31.513 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:31.513 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:31.513 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:31.513 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:31.513 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.513 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.513 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.775 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:24:31.776 ************************************ 00:24:31.776 END TEST nvmf_nsid 00:24:31.776 ************************************ 00:24:31.776 00:24:31.776 real 0m4.190s 00:24:31.776 user 0m6.341s 00:24:31.776 sys 0m0.967s 00:24:31.776 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.776 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:31.776 06:26:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:31.776 ************************************ 00:24:31.776 END TEST nvmf_target_extra 00:24:31.776 ************************************ 00:24:31.776 00:24:31.776 real 11m46.005s 00:24:31.776 user 36m19.685s 00:24:31.776 sys 1m38.448s 00:24:31.776 06:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.776 06:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.776 06:26:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:31.776 06:26:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.776 06:26:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.776 06:26:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.776 ************************************ 00:24:31.776 START TEST nvmf_host 00:24:31.776 ************************************ 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:31.776 * Looking for test storage... 
00:24:31.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.776 --rc genhtml_branch_coverage=1 00:24:31.776 --rc genhtml_function_coverage=1 00:24:31.776 --rc genhtml_legend=1 00:24:31.776 --rc geninfo_all_blocks=1 00:24:31.776 --rc geninfo_unexecuted_blocks=1 00:24:31.776 00:24:31.776 ' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.776 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:31.776 --rc genhtml_branch_coverage=1 00:24:31.776 --rc genhtml_function_coverage=1 00:24:31.776 --rc genhtml_legend=1 00:24:31.776 --rc geninfo_all_blocks=1 00:24:31.776 --rc geninfo_unexecuted_blocks=1 00:24:31.776 00:24:31.776 ' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.776 --rc genhtml_branch_coverage=1 00:24:31.776 --rc genhtml_function_coverage=1 00:24:31.776 --rc genhtml_legend=1 00:24:31.776 --rc geninfo_all_blocks=1 00:24:31.776 --rc geninfo_unexecuted_blocks=1 00:24:31.776 00:24:31.776 ' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.776 --rc genhtml_branch_coverage=1 00:24:31.776 --rc genhtml_function_coverage=1 00:24:31.776 --rc genhtml_legend=1 00:24:31.776 --rc geninfo_all_blocks=1 00:24:31.776 --rc geninfo_unexecuted_blocks=1 00:24:31.776 00:24:31.776 ' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.776 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.776 06:26:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:31.777 06:26:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:31.777 06:26:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:31.777 06:26:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
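Before the multicontroller output begins, one recurring artifact of this prologue deserves a note: the "common.sh: line 33: [: : integer expression expected" complaint printed right after build_nvmf_app_args is a benign shell quirk, not a test failure. Line 33 expands an unset variable into a numeric test, so `[` sees an empty string where `-eq` demands an integer; the harness ignores the nonzero exit status and carries on. A minimal reproduction and the usual defensive rewrites (FOO is a stand-in for whichever flag common.sh expands there):

    unset FOO
    [ "$FOO" -eq 1 ]             # prints: [: : integer expression expected (exit status 2)

    # rewrites that tolerate an empty or unset value:
    [ "${FOO:-0}" -eq 1 ]        # supply a numeric default before comparing
    [[ ${FOO:-0} -eq 1 ]]        # bash [[ ]] evaluates arithmetically; the default keeps intent explicit

The same message reappears each time nvmf/common.sh is re-sourced for a new sub-test, as it does again below.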
00:24:31.777 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.777 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.777 06:26:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.777 ************************************ 00:24:31.777 START TEST nvmf_multicontroller 00:24:31.777 ************************************ 00:24:31.777 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:32.037 * Looking for test storage... 00:24:32.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.037 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:32.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.037 --rc genhtml_branch_coverage=1 00:24:32.037 --rc genhtml_function_coverage=1 00:24:32.037 --rc genhtml_legend=1 00:24:32.037 --rc geninfo_all_blocks=1 00:24:32.037 --rc geninfo_unexecuted_blocks=1 00:24:32.037 00:24:32.037 ' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:32.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.038 --rc genhtml_branch_coverage=1 00:24:32.038 --rc genhtml_function_coverage=1 00:24:32.038 --rc genhtml_legend=1 00:24:32.038 --rc geninfo_all_blocks=1 00:24:32.038 --rc geninfo_unexecuted_blocks=1 00:24:32.038 00:24:32.038 ' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.038 --rc genhtml_branch_coverage=1 00:24:32.038 --rc genhtml_function_coverage=1 00:24:32.038 --rc genhtml_legend=1 00:24:32.038 --rc geninfo_all_blocks=1 00:24:32.038 --rc geninfo_unexecuted_blocks=1 00:24:32.038 00:24:32.038 ' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.038 --rc genhtml_branch_coverage=1 00:24:32.038 --rc genhtml_function_coverage=1 00:24:32.038 --rc genhtml_legend=1 00:24:32.038 --rc geninfo_all_blocks=1 00:24:32.038 --rc geninfo_unexecuted_blocks=1 00:24:32.038 00:24:32.038 ' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:32.038 06:26:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.038 06:26:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:32.038 06:26:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:32.038 Cannot find device "nvmf_init_br" 00:24:32.038 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:32.039 Cannot find device "nvmf_init_br2" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:32.039 Cannot find device "nvmf_tgt_br" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.039 Cannot find device "nvmf_tgt_br2" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:32.039 Cannot find device "nvmf_init_br" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:32.039 Cannot find device "nvmf_init_br2" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:32.039 Cannot find device "nvmf_tgt_br" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:32.039 Cannot find device "nvmf_tgt_br2" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:32.039 Cannot find device "nvmf_br" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:32.039 Cannot find device "nvmf_init_if" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:32.039 Cannot find device "nvmf_init_if2" 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:24:32.039 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:32.298 06:26:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:32.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:24:32.298 00:24:32.298 --- 10.0.0.3 ping statistics --- 00:24:32.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.298 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:32.298 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:32.298 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:24:32.298 00:24:32.298 --- 10.0.0.4 ping statistics --- 00:24:32.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.298 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:32.298 00:24:32.298 --- 10.0.0.1 ping statistics --- 00:24:32.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.298 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:32.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:24:32.298 00:24:32.298 --- 10.0.0.2 ping statistics --- 00:24:32.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.298 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=104469 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 104469 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 104469 ']' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.298 06:26:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.298 [2024-11-27 06:26:51.795620] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
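At this point nvmfappstart launches the target inside the namespace (NVMF_APP was prefixed with the ip netns exec command at common.sh@227) and waitforlisten polls for the RPC socket; -e 0xFFFF enables all tracepoint groups and -m 0xE pins reactors to cores 1-3, matching the notices above. A minimal sketch of the same sequence; the binary and socket paths are taken from the trace, and the polling loop is a simplified stand-in for waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Wait for the app to come up and listen on its UNIX domain RPC socket.
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.1
  done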
00:24:32.298 [2024-11-27 06:26:51.795682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.556 [2024-11-27 06:26:51.910952] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:32.556 [2024-11-27 06:26:51.938793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.556 [2024-11-27 06:26:51.957962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.556 [2024-11-27 06:26:51.957994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.556 [2024-11-27 06:26:51.958001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.556 [2024-11-27 06:26:51.958006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.556 [2024-11-27 06:26:51.958010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.556 [2024-11-27 06:26:51.958636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.556 [2024-11-27 06:26:51.958811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.556 [2024-11-27 06:26:51.959082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.495 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 [2024-11-27 06:26:52.708049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 Malloc0 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 [2024-11-27 06:26:52.754494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 [2024-11-27 06:26:52.762428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 Malloc1 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=104521 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 104521 /var/tmp/bdevperf.sock 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 104521 ']' 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:33.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
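Steps @27-@41 above provision the target over its RPC socket: one TCP transport, two subsystems (cnode1, cnode2), a 64 MB malloc bdev as a namespace in each, and listeners on 10.0.0.3 ports 4420 and 4421; step @43 then starts bdevperf on a separate RPC socket. The same provisioning can be expressed directly with rpc.py, of which rpc_cmd in the trace is a thin wrapper (a sketch; only the cnode1 half is shown):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421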
00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.496 06:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.440 NVMe0n1 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.440 1 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.440 2024/11/27 06:26:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:34.440 request: 00:24:34.440 { 00:24:34.440 "method": "bdev_nvme_attach_controller", 00:24:34.440 "params": { 00:24:34.440 "name": "NVMe0", 00:24:34.440 "trtype": "tcp", 00:24:34.440 "traddr": "10.0.0.3", 00:24:34.440 "adrfam": "ipv4", 00:24:34.440 "trsvcid": "4420", 00:24:34.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.440 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:34.440 "hostaddr": "10.0.0.1", 00:24:34.440 "prchk_reftag": false, 00:24:34.440 "prchk_guard": false, 00:24:34.440 "hdgst": false, 00:24:34.440 "ddgst": false, 00:24:34.440 "allow_unrecognized_csi": false 00:24:34.440 } 00:24:34.440 } 00:24:34.440 Got JSON-RPC error response 00:24:34.440 GoRPCClient: error on JSON-RPC call 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.440 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.440 2024/11/27 06:26:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:34.440 request: 00:24:34.440 { 00:24:34.440 "method": "bdev_nvme_attach_controller", 00:24:34.440 "params": { 00:24:34.440 "name": "NVMe0", 00:24:34.440 "trtype": "tcp", 00:24:34.440 "traddr": "10.0.0.3", 00:24:34.440 "adrfam": "ipv4", 00:24:34.440 "trsvcid": "4420", 00:24:34.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:34.440 "hostaddr": "10.0.0.1", 00:24:34.440 "prchk_reftag": false, 00:24:34.440 "prchk_guard": false, 00:24:34.440 "hdgst": false, 00:24:34.440 "ddgst": false, 00:24:34.440 "allow_unrecognized_csi": false 00:24:34.440 } 00:24:34.440 } 00:24:34.441 Got JSON-RPC error response 00:24:34.441 GoRPCClient: error on JSON-RPC call 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.441 2024/11/27 06:26:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:24:34.441 request: 00:24:34.441 { 00:24:34.441 
"method": "bdev_nvme_attach_controller", 00:24:34.441 "params": { 00:24:34.441 "name": "NVMe0", 00:24:34.441 "trtype": "tcp", 00:24:34.441 "traddr": "10.0.0.3", 00:24:34.441 "adrfam": "ipv4", 00:24:34.441 "trsvcid": "4420", 00:24:34.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.441 "hostaddr": "10.0.0.1", 00:24:34.441 "prchk_reftag": false, 00:24:34.441 "prchk_guard": false, 00:24:34.441 "hdgst": false, 00:24:34.441 "ddgst": false, 00:24:34.441 "multipath": "disable", 00:24:34.441 "allow_unrecognized_csi": false 00:24:34.441 } 00:24:34.441 } 00:24:34.441 Got JSON-RPC error response 00:24:34.441 GoRPCClient: error on JSON-RPC call 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.441 2024/11/27 06:26:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:34.441 request: 00:24:34.441 { 00:24:34.441 "method": "bdev_nvme_attach_controller", 00:24:34.441 "params": { 00:24:34.441 "name": "NVMe0", 00:24:34.441 "trtype": "tcp", 00:24:34.441 "traddr": 
"10.0.0.3", 00:24:34.441 "adrfam": "ipv4", 00:24:34.441 "trsvcid": "4420", 00:24:34.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.441 "hostaddr": "10.0.0.1", 00:24:34.441 "prchk_reftag": false, 00:24:34.441 "prchk_guard": false, 00:24:34.441 "hdgst": false, 00:24:34.441 "ddgst": false, 00:24:34.441 "multipath": "failover", 00:24:34.441 "allow_unrecognized_csi": false 00:24:34.441 } 00:24:34.441 } 00:24:34.441 Got JSON-RPC error response 00:24:34.441 GoRPCClient: error on JSON-RPC call 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.441 NVMe0n1 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.441 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:34.442 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 06:26:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.704 00:24:34.704 06:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.704 06:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.704 06:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.704 06:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:34.704 06:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.704 06:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.704 06:26:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:34.704 06:26:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.664 { 00:24:35.664 "results": [ 00:24:35.664 { 00:24:35.664 "job": "NVMe0n1", 00:24:35.664 "core_mask": "0x1", 00:24:35.664 "workload": "write", 00:24:35.664 "status": "finished", 00:24:35.664 "queue_depth": 128, 00:24:35.664 "io_size": 4096, 00:24:35.664 "runtime": 1.005661, 00:24:35.664 "iops": 20990.17462146787, 00:24:35.664 "mibps": 81.99286961510887, 00:24:35.664 "io_failed": 0, 00:24:35.664 "io_timeout": 0, 00:24:35.664 "avg_latency_us": 6088.76145836446, 00:24:35.664 "min_latency_us": 3579.273846153846, 00:24:35.664 "max_latency_us": 12098.953846153847 00:24:35.664 } 00:24:35.664 ], 00:24:35.664 "core_count": 1 00:24:35.664 } 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.664 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 nvme1n1 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
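The Code=-114 errors earlier are the expected negative results: reusing controller name NVMe0 for the same network path is rejected in every multipath mode, while the later attach of NVMe0 to port 4421 succeeds as a second path. perform_tests then returns its results as JSON on the bdevperf RPC socket (top of this block), which makes the numbers easy to post-process. A sketch, assuming the output above were saved to results.json; the field names match what bdevperf.py printed, and the jq invocation is an illustration rather than part of the test:

  jq -r '.results[] |
      "\(.job): \(.iops | floor) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json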
00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 nvme1n1 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 104521 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 104521 ']' 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 104521 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104521 00:24:35.943 killing process with pid 104521 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104521' 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 104521 00:24:35.943 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 104521 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # 
trap - SIGINT SIGTERM EXIT 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:36.205 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:36.205 [2024-11-27 06:26:52.854355] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:36.205 [2024-11-27 06:26:52.854421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104521 ] 00:24:36.205 [2024-11-27 06:26:52.968188] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:36.205 [2024-11-27 06:26:52.994742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.205 [2024-11-27 06:26:53.013159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.205 [2024-11-27 06:26:54.005045] bdev.c:4926:bdev_name_add: *ERROR*: Bdev name 5160763f-5030-4256-a904-1daea9de3b5f already exists 00:24:36.205 [2024-11-27 06:26:54.005087] bdev.c:8146:bdev_register: *ERROR*: Unable to add uuid:5160763f-5030-4256-a904-1daea9de3b5f alias for bdev NVMe1n1 00:24:36.205 [2024-11-27 06:26:54.005098] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:36.205 Running I/O for 1 seconds... 
00:24:36.205 20981.00 IOPS, 81.96 MiB/s 00:24:36.205 Latency(us) 00:24:36.205 [2024-11-27T06:26:55.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.205 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:36.205 NVMe0n1 : 1.01 20990.17 81.99 0.00 0.00 6088.76 3579.27 12098.95 00:24:36.205 [2024-11-27T06:26:55.735Z] =================================================================================================================== 00:24:36.205 [2024-11-27T06:26:55.735Z] Total : 20990.17 81.99 0.00 0.00 6088.76 3579.27 12098.95 00:24:36.205 Received shutdown signal, test time was about 1.000000 seconds 00:24:36.205 00:24:36.205 Latency(us) 00:24:36.205 [2024-11-27T06:26:55.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.205 [2024-11-27T06:26:55.735Z] =================================================================================================================== 00:24:36.205 [2024-11-27T06:26:55.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.205 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.205 rmmod nvme_tcp 00:24:36.205 rmmod nvme_fabrics 00:24:36.205 rmmod nvme_keyring 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 104469 ']' 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 104469 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 104469 ']' 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 104469 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104469 00:24:36.205 killing process with pid 104469 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:36.205 06:26:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104469' 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 104469 00:24:36.205 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 104469 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.467 06:26:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
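nvmftestfini tears everything down in reverse: unload the nvme-tcp modules, kill the target, strip only the SPDK_NVMF-tagged firewall rules (iptr at common.sh@297), detach the bridge ports, and delete the links and namespace. Condensed, with names from the trace; the final netns delete is assumed to be what _remove_spdk_ns does:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
  ip link set nvmf_init_br nomaster
  ip link set nvmf_tgt_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk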
00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:24:36.730 00:24:36.730 real 0m4.722s 00:24:36.730 user 0m14.732s 00:24:36.730 sys 0m0.872s 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.730 ************************************ 00:24:36.730 END TEST nvmf_multicontroller 00:24:36.730 ************************************ 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.730 ************************************ 00:24:36.730 START TEST nvmf_aer 00:24:36.730 ************************************ 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:36.730 * Looking for test storage... 00:24:36.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.730 --rc genhtml_branch_coverage=1 00:24:36.730 --rc genhtml_function_coverage=1 00:24:36.730 --rc genhtml_legend=1 00:24:36.730 --rc geninfo_all_blocks=1 00:24:36.730 --rc geninfo_unexecuted_blocks=1 00:24:36.730 00:24:36.730 ' 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.730 --rc genhtml_branch_coverage=1 00:24:36.730 --rc genhtml_function_coverage=1 00:24:36.730 --rc genhtml_legend=1 00:24:36.730 --rc geninfo_all_blocks=1 00:24:36.730 --rc geninfo_unexecuted_blocks=1 00:24:36.730 00:24:36.730 ' 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.730 --rc genhtml_branch_coverage=1 00:24:36.730 --rc genhtml_function_coverage=1 00:24:36.730 --rc genhtml_legend=1 00:24:36.730 --rc geninfo_all_blocks=1 00:24:36.730 --rc geninfo_unexecuted_blocks=1 00:24:36.730 00:24:36.730 ' 00:24:36.730 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:36.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.730 --rc genhtml_branch_coverage=1 00:24:36.731 --rc genhtml_function_coverage=1 00:24:36.731 --rc genhtml_legend=1 00:24:36.731 --rc geninfo_all_blocks=1 00:24:36.731 --rc geninfo_unexecuted_blocks=1 00:24:36.731 00:24:36.731 ' 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.731 
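The lcov probe above walks scripts/common.sh's cmp_versions: split both version strings into fields (the real helper splits on '.', '-' and ':'), then compare numerically field by field, treating missing fields as zero, so 1.15 sorts before 2. A self-contained stand-in with a hypothetical name, not the upstream helper:

  version_lt() {   # succeeds when $1 sorts before $2, numeric field by field
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'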
06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.731 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
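[editor's note] The "[: : integer expression expected" message above comes from common.sh line 33 testing an empty string with the numeric -eq operator ('[' '' -eq 1 ']'). A minimal reproduction in plain bash; the variable name is illustrative, since the log does not show which setting was empty:

    flag=""                                  # empty, as in this run
    [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting keeps the operand numeric

The error is harmless here: test exits non-zero, the guard falls through, and the run continues, which is exactly what the log shows.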
00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:36.731 Cannot find device "nvmf_init_br" 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:24:36.731 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:36.731 Cannot find device "nvmf_init_br2" 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:36.732 Cannot find device "nvmf_tgt_br" 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:36.732 Cannot find device "nvmf_tgt_br2" 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:36.732 Cannot find device "nvmf_init_br" 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:36.732 Cannot find device "nvmf_init_br2" 00:24:36.732 06:26:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:36.732 Cannot find device "nvmf_tgt_br" 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:24:36.732 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:36.992 Cannot find device "nvmf_tgt_br2" 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:36.992 Cannot find device "nvmf_br" 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:36.992 Cannot find device "nvmf_init_if" 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:36.992 Cannot find device "nvmf_init_if2" 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:36.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:36.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:24:36.992 00:24:36.992 --- 10.0.0.3 ping statistics --- 00:24:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.992 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:36.992 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:36.992 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:24:36.992 00:24:36.992 --- 10.0.0.4 ping statistics --- 00:24:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.992 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:36.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:24:36.992 00:24:36.992 --- 10.0.0.1 ping statistics --- 00:24:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.992 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:36.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:24:36.992 00:24:36.992 --- 10.0.0.2 ping statistics --- 00:24:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.992 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=104826 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 104826 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 104826 ']' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.992 06:26:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:36.992 [2024-11-27 06:26:56.516805] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:24:36.992 [2024-11-27 06:26:56.516870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.253 [2024-11-27 06:26:56.632094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:37.253 [2024-11-27 06:26:56.657716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.253 [2024-11-27 06:26:56.677805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.253 [2024-11-27 06:26:56.677986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.253 [2024-11-27 06:26:56.678051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.253 [2024-11-27 06:26:56.678108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.253 [2024-11-27 06:26:56.678124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.253 [2024-11-27 06:26:56.678827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.253 [2024-11-27 06:26:56.678879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.253 [2024-11-27 06:26:56.678935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.253 [2024-11-27 06:26:56.678937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.194 [2024-11-27 06:26:57.430490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.194 Malloc0 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.194 [2024-11-27 06:26:57.485054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.194 [ 00:24:38.194 { 00:24:38.194 "allow_any_host": true, 00:24:38.194 "hosts": [], 00:24:38.194 "listen_addresses": [], 00:24:38.194 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.194 "subtype": "Discovery" 00:24:38.194 }, 00:24:38.194 { 00:24:38.194 "allow_any_host": true, 00:24:38.194 "hosts": [], 00:24:38.194 "listen_addresses": [ 00:24:38.194 { 00:24:38.194 "adrfam": "IPv4", 00:24:38.194 "traddr": "10.0.0.3", 00:24:38.194 "trsvcid": "4420", 00:24:38.194 "trtype": "TCP" 00:24:38.194 } 00:24:38.194 ], 00:24:38.194 "max_cntlid": 65519, 00:24:38.194 "max_namespaces": 2, 00:24:38.194 "min_cntlid": 1, 00:24:38.194 "model_number": "SPDK bdev Controller", 00:24:38.194 "namespaces": [ 00:24:38.194 { 00:24:38.194 "bdev_name": "Malloc0", 00:24:38.194 "name": "Malloc0", 00:24:38.194 "nguid": "62C13EEB0BD04901B5A9F19C27D34E8E", 00:24:38.194 "nsid": 1, 00:24:38.194 "uuid": "62c13eeb-0bd0-4901-b5a9-f19c27d34e8e" 00:24:38.194 } 00:24:38.194 ], 00:24:38.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.194 "serial_number": "SPDK00000000000001", 00:24:38.194 "subtype": "NVMe" 00:24:38.194 } 00:24:38.194 ] 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=104880 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:38.194 06:26:57 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:38.194 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.195 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 Malloc1 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 Asynchronous Event Request test 00:24:38.454 Attaching to 10.0.0.3 00:24:38.454 Attached to 10.0.0.3 00:24:38.454 Registering asynchronous event callbacks... 00:24:38.454 Starting namespace attribute notice tests for all controllers... 00:24:38.454 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:38.454 aer_cb - Changed Namespace 00:24:38.454 Cleaning up... 
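[editor's note] Condensed from the rpc_cmd calls above, this is the target-side sequence the AER test drives. rpc_cmd is the test harness's wrapper around SPDK's RPC client; the rpc.py path below is an assumption based on this checkout:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Adding a second namespace while the aer tool is attached is what fires the
    # namespace-attribute-changed notice logged above (aen_event_type: 0x02):
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2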
00:24:38.454 [ 00:24:38.454 { 00:24:38.454 "allow_any_host": true, 00:24:38.454 "hosts": [], 00:24:38.454 "listen_addresses": [], 00:24:38.454 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.454 "subtype": "Discovery" 00:24:38.454 }, 00:24:38.454 { 00:24:38.454 "allow_any_host": true, 00:24:38.454 "hosts": [], 00:24:38.454 "listen_addresses": [ 00:24:38.454 { 00:24:38.454 "adrfam": "IPv4", 00:24:38.454 "traddr": "10.0.0.3", 00:24:38.454 "trsvcid": "4420", 00:24:38.454 "trtype": "TCP" 00:24:38.454 } 00:24:38.454 ], 00:24:38.454 "max_cntlid": 65519, 00:24:38.454 "max_namespaces": 2, 00:24:38.454 "min_cntlid": 1, 00:24:38.454 "model_number": "SPDK bdev Controller", 00:24:38.454 "namespaces": [ 00:24:38.454 { 00:24:38.454 "bdev_name": "Malloc0", 00:24:38.454 "name": "Malloc0", 00:24:38.454 "nguid": "62C13EEB0BD04901B5A9F19C27D34E8E", 00:24:38.454 "nsid": 1, 00:24:38.454 "uuid": "62c13eeb-0bd0-4901-b5a9-f19c27d34e8e" 00:24:38.454 }, 00:24:38.454 { 00:24:38.454 "bdev_name": "Malloc1", 00:24:38.454 "name": "Malloc1", 00:24:38.454 "nguid": "6CC189E39EC34272854F03C03B676BE7", 00:24:38.454 "nsid": 2, 00:24:38.454 "uuid": "6cc189e3-9ec3-4272-854f-03c03b676be7" 00:24:38.454 } 00:24:38.454 ], 00:24:38.454 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.454 "serial_number": "SPDK00000000000001", 00:24:38.454 "subtype": "NVMe" 00:24:38.454 } 00:24:38.454 ] 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 104880 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.454 rmmod 
nvme_tcp 00:24:38.454 rmmod nvme_fabrics 00:24:38.454 rmmod nvme_keyring 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 104826 ']' 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 104826 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 104826 ']' 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 104826 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104826 00:24:38.454 killing process with pid 104826 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104826' 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 104826 00:24:38.454 06:26:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 104826 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:38.713 06:26:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:38.713 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:24:38.974 00:24:38.974 real 0m2.248s 00:24:38.974 user 0m5.860s 00:24:38.974 sys 0m0.530s 00:24:38.974 ************************************ 00:24:38.974 END TEST nvmf_aer 00:24:38.974 ************************************ 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.974 ************************************ 00:24:38.974 START TEST nvmf_async_init 00:24:38.974 ************************************ 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:38.974 * Looking for test storage... 
00:24:38.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.974 --rc genhtml_branch_coverage=1 00:24:38.974 --rc genhtml_function_coverage=1 00:24:38.974 --rc genhtml_legend=1 00:24:38.974 --rc geninfo_all_blocks=1 00:24:38.974 --rc geninfo_unexecuted_blocks=1 00:24:38.974 00:24:38.974 ' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.974 --rc genhtml_branch_coverage=1 00:24:38.974 --rc genhtml_function_coverage=1 00:24:38.974 --rc genhtml_legend=1 00:24:38.974 --rc geninfo_all_blocks=1 00:24:38.974 --rc geninfo_unexecuted_blocks=1 00:24:38.974 00:24:38.974 ' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.974 --rc genhtml_branch_coverage=1 00:24:38.974 --rc genhtml_function_coverage=1 00:24:38.974 --rc genhtml_legend=1 00:24:38.974 --rc geninfo_all_blocks=1 00:24:38.974 --rc geninfo_unexecuted_blocks=1 00:24:38.974 00:24:38.974 ' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.974 --rc genhtml_branch_coverage=1 00:24:38.974 --rc genhtml_function_coverage=1 00:24:38.974 --rc genhtml_legend=1 00:24:38.974 --rc geninfo_all_blocks=1 00:24:38.974 --rc geninfo_unexecuted_blocks=1 00:24:38.974 00:24:38.974 ' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.974 06:26:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.974 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:38.975 06:26:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fcd7c03448164c74ae704e18b6ef7a3d 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
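[editor's note] The nvmf_veth_init steps that follow repeat the setup already traced for nvmf_aer. For reference, a minimal standalone sketch of the topology those ip commands build, reduced to the first veth pair (run as root; interface names as in the test):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Open NVMe/TCP port 4420 on the initiator side and allow bridge-local forwarding:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

This leaves the initiator at 10.0.0.1 in the root namespace and the target at 10.0.0.3 inside nvmf_tgt_ns_spdk, joined by the bridge, which is what the ping checks in the log verify.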
00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:38.975 Cannot find device "nvmf_init_br" 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:24:38.975 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:39.234 Cannot find device "nvmf_init_br2" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:39.234 Cannot find device "nvmf_tgt_br" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:39.234 Cannot find device "nvmf_tgt_br2" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:39.234 Cannot find device "nvmf_init_br" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:39.234 Cannot find device "nvmf_init_br2" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:39.234 Cannot find device "nvmf_tgt_br" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:39.234 Cannot find device "nvmf_tgt_br2" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:39.234 Cannot find device "nvmf_br" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:39.234 Cannot find device "nvmf_init_if" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:39.234 Cannot find device "nvmf_init_if2" 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:39.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:24:39.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:39.234 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:39.235 06:26:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:39.235 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:39.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:39.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:24:39.494 00:24:39.494 --- 10.0.0.3 ping statistics --- 00:24:39.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.494 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:39.494 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:39.494 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:24:39.494 00:24:39.494 --- 10.0.0.4 ping statistics --- 00:24:39.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.494 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:39.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:39.494 00:24:39.494 --- 10.0.0.1 ping statistics --- 00:24:39.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.494 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:39.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:39.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:24:39.494 00:24:39.494 --- 10.0.0.2 ping statistics --- 00:24:39.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.494 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=105101 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 105101 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 105101 ']' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.494 06:26:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.494 [2024-11-27 06:26:58.835483] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:39.494 [2024-11-27 06:26:58.835538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.494 [2024-11-27 06:26:58.949858] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
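The block above is nvmf/common.sh bringing up the harness's self-contained test network before launching the target: a network namespace for the target, four veth pairs, a bridge joining their peer ends, 10.0.0.1-.4/24 addressing, iptables ACCEPT rules for the NVMe/TCP port, and cross-namespace ping checks. A minimal sketch of that topology, using the interface names and addresses from the trace (the `ipts` helper seen above is the harness's iptables wrapper, which appends an 'SPDK_NVMF:<rule>' comment so teardown can strip exactly these rules again):

# Sketch only: condensed from the nvmf_veth_init steps traced above.
ip netns add nvmf_tgt_ns_spdk                      # target gets its own netns

# Initiator- and target-side interfaces as veth pairs; the *_br peers
# are enslaved to a bridge below.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # move target ends inside
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# (each interface, plus the namespace's lo, is also set "up", as in the trace)

ip link add nvmf_br type bridge                    # one L2 segment for all peers
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                 # NVMe/TCP default port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.3                                 # initiator -> target reachability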
00:24:39.494 [2024-11-27 06:26:58.973670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.494 [2024-11-27 06:26:58.991665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.494 [2024-11-27 06:26:58.991811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.494 [2024-11-27 06:26:58.991848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.494 [2024-11-27 06:26:58.991885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.494 [2024-11-27 06:26:58.991919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.494 [2024-11-27 06:26:58.992147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 [2024-11-27 06:26:59.754938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 null0 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.439 06:26:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fcd7c03448164c74ae704e18b6ef7a3d 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.439 [2024-11-27 06:26:59.795005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.439 06:26:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.698 nvme0n1 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.698 [ 00:24:40.698 { 00:24:40.698 "aliases": [ 00:24:40.698 "fcd7c034-4816-4c74-ae70-4e18b6ef7a3d" 00:24:40.698 ], 00:24:40.698 "assigned_rate_limits": { 00:24:40.698 "r_mbytes_per_sec": 0, 00:24:40.698 "rw_ios_per_sec": 0, 00:24:40.698 "rw_mbytes_per_sec": 0, 00:24:40.698 "w_mbytes_per_sec": 0 00:24:40.698 }, 00:24:40.698 "block_size": 512, 00:24:40.698 "claimed": false, 00:24:40.698 "driver_specific": { 00:24:40.698 "mp_policy": "active_passive", 00:24:40.698 "nvme": [ 00:24:40.698 { 00:24:40.698 "ctrlr_data": { 00:24:40.698 "ana_reporting": false, 00:24:40.698 "cntlid": 1, 00:24:40.698 "firmware_revision": "25.01", 00:24:40.698 "model_number": "SPDK bdev Controller", 00:24:40.698 "multi_ctrlr": true, 00:24:40.698 "oacs": { 00:24:40.698 "firmware": 0, 00:24:40.698 "format": 0, 00:24:40.698 "ns_manage": 0, 00:24:40.698 "security": 0 00:24:40.698 }, 00:24:40.698 "serial_number": "00000000000000000000", 00:24:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:40.698 "vendor_id": "0x8086" 00:24:40.698 }, 00:24:40.698 "ns_data": { 00:24:40.698 "can_share": true, 00:24:40.698 "id": 1 00:24:40.698 }, 00:24:40.698 "trid": { 00:24:40.698 "adrfam": "IPv4", 00:24:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:40.698 "traddr": "10.0.0.3", 00:24:40.698 "trsvcid": "4420", 00:24:40.698 "trtype": "TCP" 00:24:40.698 }, 00:24:40.698 "vs": { 00:24:40.698 "nvme_version": "1.3" 00:24:40.698 } 00:24:40.698 } 00:24:40.698 ] 00:24:40.698 }, 00:24:40.698 "memory_domains": [ 00:24:40.698 { 00:24:40.698 "dma_device_id": "system", 00:24:40.698 
"dma_device_type": 1 00:24:40.698 } 00:24:40.698 ], 00:24:40.698 "name": "nvme0n1", 00:24:40.698 "num_blocks": 2097152, 00:24:40.698 "numa_id": -1, 00:24:40.698 "product_name": "NVMe disk", 00:24:40.698 "supported_io_types": { 00:24:40.698 "abort": true, 00:24:40.698 "compare": true, 00:24:40.698 "compare_and_write": true, 00:24:40.698 "copy": true, 00:24:40.698 "flush": true, 00:24:40.698 "get_zone_info": false, 00:24:40.698 "nvme_admin": true, 00:24:40.698 "nvme_io": true, 00:24:40.698 "nvme_io_md": false, 00:24:40.698 "nvme_iov_md": false, 00:24:40.698 "read": true, 00:24:40.698 "reset": true, 00:24:40.698 "seek_data": false, 00:24:40.698 "seek_hole": false, 00:24:40.698 "unmap": false, 00:24:40.698 "write": true, 00:24:40.698 "write_zeroes": true, 00:24:40.698 "zcopy": false, 00:24:40.698 "zone_append": false, 00:24:40.698 "zone_management": false 00:24:40.698 }, 00:24:40.698 "uuid": "fcd7c034-4816-4c74-ae70-4e18b6ef7a3d", 00:24:40.698 "zoned": false 00:24:40.698 } 00:24:40.698 ] 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.698 [2024-11-27 06:27:00.051064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:40.698 [2024-11-27 06:27:00.051125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1378640 (9): Bad file descriptor 00:24:40.698 [2024-11-27 06:27:00.182997] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.698 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.698 [ 00:24:40.698 { 00:24:40.698 "aliases": [ 00:24:40.698 "fcd7c034-4816-4c74-ae70-4e18b6ef7a3d" 00:24:40.698 ], 00:24:40.698 "assigned_rate_limits": { 00:24:40.698 "r_mbytes_per_sec": 0, 00:24:40.698 "rw_ios_per_sec": 0, 00:24:40.698 "rw_mbytes_per_sec": 0, 00:24:40.698 "w_mbytes_per_sec": 0 00:24:40.698 }, 00:24:40.698 "block_size": 512, 00:24:40.698 "claimed": false, 00:24:40.698 "driver_specific": { 00:24:40.698 "mp_policy": "active_passive", 00:24:40.698 "nvme": [ 00:24:40.698 { 00:24:40.698 "ctrlr_data": { 00:24:40.698 "ana_reporting": false, 00:24:40.698 "cntlid": 2, 00:24:40.698 "firmware_revision": "25.01", 00:24:40.698 "model_number": "SPDK bdev Controller", 00:24:40.698 "multi_ctrlr": true, 00:24:40.698 "oacs": { 00:24:40.698 "firmware": 0, 00:24:40.698 "format": 0, 00:24:40.698 "ns_manage": 0, 00:24:40.698 "security": 0 00:24:40.698 }, 00:24:40.698 "serial_number": "00000000000000000000", 00:24:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:40.698 "vendor_id": "0x8086" 00:24:40.698 }, 00:24:40.698 "ns_data": { 00:24:40.698 "can_share": true, 00:24:40.698 "id": 1 00:24:40.698 }, 00:24:40.698 "trid": { 00:24:40.698 "adrfam": "IPv4", 00:24:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:40.699 "traddr": "10.0.0.3", 00:24:40.699 "trsvcid": "4420", 00:24:40.699 "trtype": "TCP" 00:24:40.699 }, 00:24:40.699 "vs": { 00:24:40.699 "nvme_version": "1.3" 00:24:40.699 } 00:24:40.699 } 00:24:40.699 ] 00:24:40.699 }, 00:24:40.699 "memory_domains": [ 00:24:40.699 { 00:24:40.699 "dma_device_id": "system", 00:24:40.699 "dma_device_type": 1 00:24:40.699 } 00:24:40.699 ], 00:24:40.699 "name": "nvme0n1", 00:24:40.699 "num_blocks": 2097152, 00:24:40.699 "numa_id": -1, 00:24:40.699 "product_name": "NVMe disk", 00:24:40.699 "supported_io_types": { 00:24:40.699 "abort": true, 00:24:40.699 "compare": true, 00:24:40.699 "compare_and_write": true, 00:24:40.699 "copy": true, 00:24:40.699 "flush": true, 00:24:40.699 "get_zone_info": false, 00:24:40.699 "nvme_admin": true, 00:24:40.699 "nvme_io": true, 00:24:40.699 "nvme_io_md": false, 00:24:40.699 "nvme_iov_md": false, 00:24:40.699 "read": true, 00:24:40.699 "reset": true, 00:24:40.699 "seek_data": false, 00:24:40.699 "seek_hole": false, 00:24:40.699 "unmap": false, 00:24:40.699 "write": true, 00:24:40.699 "write_zeroes": true, 00:24:40.699 "zcopy": false, 00:24:40.699 "zone_append": false, 00:24:40.699 "zone_management": false 00:24:40.699 }, 00:24:40.699 "uuid": "fcd7c034-4816-4c74-ae70-4e18b6ef7a3d", 00:24:40.699 "zoned": false 00:24:40.699 } 00:24:40.699 ] 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.47MPDdT5Lf 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.47MPDdT5Lf 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.47MPDdT5Lf 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.699 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.050 [2024-11-27 06:27:00.243176] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.050 [2024-11-27 06:27:00.243279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.050 [2024-11-27 06:27:00.259177] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.050 nvme0n1 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.050 [ 00:24:41.050 { 00:24:41.050 "aliases": [ 00:24:41.050 "fcd7c034-4816-4c74-ae70-4e18b6ef7a3d" 00:24:41.050 ], 00:24:41.050 "assigned_rate_limits": { 00:24:41.050 "r_mbytes_per_sec": 0, 00:24:41.050 "rw_ios_per_sec": 0, 00:24:41.050 "rw_mbytes_per_sec": 0, 00:24:41.050 "w_mbytes_per_sec": 0 00:24:41.050 }, 00:24:41.050 "block_size": 512, 00:24:41.050 "claimed": false, 00:24:41.050 "driver_specific": { 00:24:41.050 "mp_policy": "active_passive", 00:24:41.050 "nvme": [ 00:24:41.050 { 00:24:41.050 "ctrlr_data": { 00:24:41.050 "ana_reporting": false, 00:24:41.050 "cntlid": 3, 00:24:41.050 "firmware_revision": "25.01", 00:24:41.050 "model_number": "SPDK bdev Controller", 00:24:41.050 "multi_ctrlr": true, 00:24:41.050 "oacs": { 00:24:41.050 "firmware": 0, 00:24:41.050 "format": 0, 00:24:41.050 "ns_manage": 0, 00:24:41.050 "security": 0 00:24:41.050 }, 00:24:41.050 "serial_number": "00000000000000000000", 00:24:41.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.050 "vendor_id": "0x8086" 00:24:41.050 }, 00:24:41.050 "ns_data": { 00:24:41.050 "can_share": true, 00:24:41.050 "id": 1 00:24:41.050 }, 00:24:41.050 "trid": { 00:24:41.050 "adrfam": "IPv4", 00:24:41.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.050 "traddr": "10.0.0.3", 00:24:41.050 "trsvcid": "4421", 00:24:41.050 "trtype": "TCP" 00:24:41.050 }, 00:24:41.050 "vs": { 00:24:41.050 "nvme_version": "1.3" 00:24:41.050 } 00:24:41.050 } 00:24:41.050 ] 00:24:41.050 }, 00:24:41.050 "memory_domains": [ 00:24:41.050 { 00:24:41.050 "dma_device_id": "system", 00:24:41.050 "dma_device_type": 1 00:24:41.050 } 00:24:41.050 ], 00:24:41.050 "name": "nvme0n1", 00:24:41.050 "num_blocks": 2097152, 00:24:41.050 "numa_id": -1, 00:24:41.050 "product_name": "NVMe disk", 00:24:41.050 "supported_io_types": { 00:24:41.050 "abort": true, 00:24:41.050 "compare": true, 00:24:41.050 "compare_and_write": true, 00:24:41.050 "copy": true, 00:24:41.050 "flush": true, 00:24:41.050 "get_zone_info": false, 00:24:41.050 "nvme_admin": true, 00:24:41.050 "nvme_io": true, 00:24:41.050 "nvme_io_md": false, 00:24:41.050 "nvme_iov_md": false, 00:24:41.050 "read": true, 00:24:41.050 "reset": true, 00:24:41.050 "seek_data": false, 00:24:41.050 "seek_hole": false, 00:24:41.050 "unmap": false, 00:24:41.050 "write": true, 00:24:41.050 "write_zeroes": true, 00:24:41.050 "zcopy": false, 00:24:41.050 "zone_append": false, 00:24:41.050 "zone_management": false 00:24:41.050 }, 00:24:41.050 "uuid": "fcd7c034-4816-4c74-ae70-4e18b6ef7a3d", 00:24:41.050 "zoned": false 00:24:41.050 } 00:24:41.050 ] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.47MPDdT5Lf 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
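The final leg above switches to TLS (both the listener and the attach log "TLS support is considered experimental"): a PSK is written to a mktemp file, registered in the keyring, and then required on a second listener. Sketched below, with the interchange-format key copied from the trace; the temp-file name is whatever mktemp returns:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # as in the sketch above

# Interchange-format PSK, stored with 0600 permissions and registered as "key0".
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
$RPC keyring_file_add_key key0 "$key_path"

# Close the subsystem to unknown hosts, then add a TLS-only listener on 4421.
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0

# The host must now present a matching NQN and the same key to connect.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0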
00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:41.050 rmmod nvme_tcp 00:24:41.050 rmmod nvme_fabrics 00:24:41.050 rmmod nvme_keyring 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 105101 ']' 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 105101 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 105101 ']' 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 105101 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.050 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105101 00:24:41.051 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.051 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.051 killing process with pid 105101 00:24:41.051 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105101' 00:24:41.051 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 105101 00:24:41.051 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 105101 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:41.309 
06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:24:41.309 00:24:41.309 real 0m2.481s 00:24:41.309 user 0m2.160s 00:24:41.309 sys 0m0.512s 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.309 ************************************ 00:24:41.309 END TEST nvmf_async_init 00:24:41.309 ************************************ 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.309 ************************************ 00:24:41.309 START TEST dma 00:24:41.309 ************************************ 00:24:41.309 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:41.568 * Looking for test storage... 
00:24:41.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.568 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.569 --rc genhtml_branch_coverage=1 00:24:41.569 --rc genhtml_function_coverage=1 00:24:41.569 --rc genhtml_legend=1 00:24:41.569 --rc geninfo_all_blocks=1 00:24:41.569 --rc geninfo_unexecuted_blocks=1 00:24:41.569 00:24:41.569 ' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.569 --rc genhtml_branch_coverage=1 00:24:41.569 --rc genhtml_function_coverage=1 00:24:41.569 --rc genhtml_legend=1 00:24:41.569 --rc geninfo_all_blocks=1 00:24:41.569 --rc geninfo_unexecuted_blocks=1 00:24:41.569 00:24:41.569 ' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.569 --rc genhtml_branch_coverage=1 00:24:41.569 --rc genhtml_function_coverage=1 00:24:41.569 --rc genhtml_legend=1 00:24:41.569 --rc geninfo_all_blocks=1 00:24:41.569 --rc geninfo_unexecuted_blocks=1 00:24:41.569 00:24:41.569 ' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.569 --rc genhtml_branch_coverage=1 00:24:41.569 --rc genhtml_function_coverage=1 00:24:41.569 --rc genhtml_legend=1 00:24:41.569 --rc geninfo_all_blocks=1 00:24:41.569 --rc geninfo_unexecuted_blocks=1 00:24:41.569 00:24:41.569 ' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.569 06:27:00 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.569 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:41.569 00:24:41.569 real 0m0.156s 00:24:41.569 user 0m0.089s 00:24:41.569 sys 0m0.078s 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.569 06:27:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:41.569 ************************************ 00:24:41.569 END TEST dma 00:24:41.569 ************************************ 00:24:41.569 06:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:41.569 06:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.569 06:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.569 06:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.569 ************************************ 00:24:41.569 START TEST nvmf_identify 00:24:41.569 ************************************ 00:24:41.569 06:27:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:41.569 * Looking for test storage... 00:24:41.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:41.569 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.569 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.569 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.828 --rc genhtml_branch_coverage=1 00:24:41.828 --rc genhtml_function_coverage=1 00:24:41.828 --rc genhtml_legend=1 00:24:41.828 --rc geninfo_all_blocks=1 00:24:41.828 --rc geninfo_unexecuted_blocks=1 00:24:41.828 00:24:41.828 ' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.828 --rc genhtml_branch_coverage=1 00:24:41.828 --rc genhtml_function_coverage=1 00:24:41.828 --rc genhtml_legend=1 00:24:41.828 --rc geninfo_all_blocks=1 00:24:41.828 --rc geninfo_unexecuted_blocks=1 00:24:41.828 00:24:41.828 ' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.828 --rc genhtml_branch_coverage=1 00:24:41.828 --rc genhtml_function_coverage=1 00:24:41.828 --rc genhtml_legend=1 00:24:41.828 --rc geninfo_all_blocks=1 00:24:41.828 --rc geninfo_unexecuted_blocks=1 00:24:41.828 00:24:41.828 ' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.828 --rc genhtml_branch_coverage=1 00:24:41.828 --rc genhtml_function_coverage=1 00:24:41.828 --rc genhtml_legend=1 00:24:41.828 --rc geninfo_all_blocks=1 00:24:41.828 --rc geninfo_unexecuted_blocks=1 00:24:41.828 00:24:41.828 ' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.828 
06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.828 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.829 06:27:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:41.829 Cannot find device "nvmf_init_br" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:41.829 Cannot find device "nvmf_init_br2" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:41.829 Cannot find device "nvmf_tgt_br" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:24:41.829 Cannot find device "nvmf_tgt_br2" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:41.829 Cannot find device "nvmf_init_br" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:41.829 Cannot find device "nvmf_init_br2" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:41.829 Cannot find device "nvmf_tgt_br" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:41.829 Cannot find device "nvmf_tgt_br2" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:41.829 Cannot find device "nvmf_br" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:41.829 Cannot find device "nvmf_init_if" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:41.829 Cannot find device "nvmf_init_if2" 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:41.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:41.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:41.829 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:42.088 
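At this point nvmf_veth_init has finished tearing down any stale interfaces (hence the harmless "Cannot find device" and "Cannot open network namespace" messages) and is building the test topology: a target network namespace, veth pairs whose bridge-side ends stay in the root namespace, and addresses on the 10.0.0.0/24 subnet. A condensed sketch of that topology, using the iproute2 commands visible in the trace:

    # Condensed recreation of the nvmf_veth_init topology (mirrors the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

The trace then brings every link up, enslaves the br-side ends to the nvmf_br bridge, and opens port 4420 in iptables before the four ping checks that follow.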
06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:42.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:42.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:24:42.088 00:24:42.088 --- 10.0.0.3 ping statistics --- 00:24:42.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.088 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:42.088 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:42.088 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:24:42.088 00:24:42.088 --- 10.0.0.4 ping statistics --- 00:24:42.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.088 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:42.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:24:42.088 00:24:42.088 --- 10.0.0.1 ping statistics --- 00:24:42.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.088 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:42.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:24:42.088 00:24:42.088 --- 10.0.0.2 ping statistics --- 00:24:42.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.088 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=105430 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 105430 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 105430 ']' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:42.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.088 06:27:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:42.088 [2024-11-27 06:27:01.523345] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:42.088 [2024-11-27 06:27:01.523405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.347 [2024-11-27 06:27:01.641881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:42.347 [2024-11-27 06:27:01.665821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.347 [2024-11-27 06:27:01.684544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.347 [2024-11-27 06:27:01.684590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.347 [2024-11-27 06:27:01.684595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.347 [2024-11-27 06:27:01.684599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.347 [2024-11-27 06:27:01.684603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
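Here host/identify.sh launches the target inside the namespace with all tracepoint groups enabled (-e 0xFFFF) and a four-core mask (-m 0xF), and waitforlisten blocks until the RPC socket is ready. A rough stand-in for that startup step, assuming the repo path from the trace; the polling loop is a simplification of what waitforlisten actually does:

    # Rough equivalent: start the target in the namespace, wait for the RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified waitforlisten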
00:24:42.347 [2024-11-27 06:27:01.685317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.347 [2024-11-27 06:27:01.685672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.347 [2024-11-27 06:27:01.685736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.347 [2024-11-27 06:27:01.685736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:42.913 [2024-11-27 06:27:02.356345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:42.913 Malloc0 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:42.913 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:43.173 [2024-11-27 06:27:02.460837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:43.173 [ 00:24:43.173 { 00:24:43.173 "allow_any_host": true, 00:24:43.173 "hosts": [], 00:24:43.173 "listen_addresses": [ 00:24:43.173 { 00:24:43.173 "adrfam": "IPv4", 00:24:43.173 "traddr": "10.0.0.3", 00:24:43.173 "trsvcid": "4420", 00:24:43.173 "trtype": "TCP" 00:24:43.173 } 00:24:43.173 ], 00:24:43.173 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:43.173 "subtype": "Discovery" 00:24:43.173 }, 00:24:43.173 { 00:24:43.173 "allow_any_host": true, 00:24:43.173 "hosts": [], 00:24:43.173 "listen_addresses": [ 00:24:43.173 { 00:24:43.173 "adrfam": "IPv4", 00:24:43.173 "traddr": "10.0.0.3", 00:24:43.173 "trsvcid": "4420", 00:24:43.173 "trtype": "TCP" 00:24:43.173 } 00:24:43.173 ], 00:24:43.173 "max_cntlid": 65519, 00:24:43.173 "max_namespaces": 32, 00:24:43.173 "min_cntlid": 1, 00:24:43.173 "model_number": "SPDK bdev Controller", 00:24:43.173 "namespaces": [ 00:24:43.173 { 00:24:43.173 "bdev_name": "Malloc0", 00:24:43.173 "eui64": "ABCDEF0123456789", 00:24:43.173 "name": "Malloc0", 00:24:43.173 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:43.173 "nsid": 1, 00:24:43.173 "uuid": "0b3aa5c8-2449-4085-983e-3c16b9a74e88" 00:24:43.173 } 00:24:43.173 ], 00:24:43.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.173 "serial_number": "SPDK00000000000001", 00:24:43.173 "subtype": "NVMe" 00:24:43.173 } 00:24:43.173 ] 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.173 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:43.173 [2024-11-27 06:27:02.504179] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:43.173 [2024-11-27 06:27:02.504230] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105483 ] 00:24:43.173 [2024-11-27 06:27:02.617307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
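The rpc_cmd calls above configure the target end to end: a TCP transport (with the -o and -u 8192 options from NVMF_TRANSPORT_OPTS), a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as namespace 1, and listeners for both cnode1 and the discovery subsystem on 10.0.0.3:4420; nvmf_get_subsystems then dumps the resulting state as JSON. The same sequence, sketched with the stock scripts/rpc.py front end and the arguments from the trace:

    # Same configuration via scripts/rpc.py (arguments copied from the trace).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

With that in place, spdk_nvme_identify is pointed at the discovery NQN on 10.0.0.3:4420 with -L all for full debug logging, which produces the connect and identify trace that follows.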
00:24:43.173 [2024-11-27 06:27:02.648568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:43.173 [2024-11-27 06:27:02.648624] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:43.173 [2024-11-27 06:27:02.648627] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:43.173 [2024-11-27 06:27:02.648638] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:43.173 [2024-11-27 06:27:02.648645] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:43.173 [2024-11-27 06:27:02.648831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:43.173 [2024-11-27 06:27:02.648857] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x743260 0 00:24:43.173 [2024-11-27 06:27:02.655920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:43.173 [2024-11-27 06:27:02.655931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:43.173 [2024-11-27 06:27:02.655934] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:43.173 [2024-11-27 06:27:02.655936] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:43.173 [2024-11-27 06:27:02.655955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.655959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.655962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.173 [2024-11-27 06:27:02.655971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:43.173 [2024-11-27 06:27:02.655989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.173 [2024-11-27 06:27:02.663916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.173 [2024-11-27 06:27:02.663928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.173 [2024-11-27 06:27:02.663930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.663932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.173 [2024-11-27 06:27:02.663940] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:43.173 [2024-11-27 06:27:02.663945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:43.173 [2024-11-27 06:27:02.663949] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:43.173 [2024-11-27 06:27:02.663958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.663961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.663963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.173 [2024-11-27 06:27:02.663968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.173 [2024-11-27 
06:27:02.663983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.173 [2024-11-27 06:27:02.664024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.173 [2024-11-27 06:27:02.664028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.173 [2024-11-27 06:27:02.664030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.664032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.173 [2024-11-27 06:27:02.664035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:43.173 [2024-11-27 06:27:02.664039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:43.173 [2024-11-27 06:27:02.664043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.664045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.664047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.173 [2024-11-27 06:27:02.664051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.173 [2024-11-27 06:27:02.664059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.173 [2024-11-27 06:27:02.664093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.173 [2024-11-27 06:27:02.664097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.173 [2024-11-27 06:27:02.664098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.664100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.173 [2024-11-27 06:27:02.664103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:43.173 [2024-11-27 06:27:02.664108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:43.173 [2024-11-27 06:27:02.664111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.664114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.173 [2024-11-27 06:27:02.664115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.174 [2024-11-27 06:27:02.664128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.174 [2024-11-27 06:27:02.664158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.174 [2024-11-27 06:27:02.664162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.174 [2024-11-27 06:27:02.664164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.174 [2024-11-27 06:27:02.664169] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:43.174 [2024-11-27 06:27:02.664174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.174 [2024-11-27 06:27:02.664189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.174 [2024-11-27 06:27:02.664224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.174 [2024-11-27 06:27:02.664228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.174 [2024-11-27 06:27:02.664229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.174 [2024-11-27 06:27:02.664234] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:43.174 [2024-11-27 06:27:02.664237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:43.174 [2024-11-27 06:27:02.664241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:43.174 [2024-11-27 06:27:02.664344] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:43.174 [2024-11-27 06:27:02.664347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:43.174 [2024-11-27 06:27:02.664352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.174 [2024-11-27 06:27:02.664368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.174 [2024-11-27 06:27:02.664406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.174 [2024-11-27 06:27:02.664410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.174 [2024-11-27 06:27:02.664411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.174 [2024-11-27 06:27:02.664416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:43.174 [2024-11-27 
06:27:02.664421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.174 [2024-11-27 06:27:02.664437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.174 [2024-11-27 06:27:02.664468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.174 [2024-11-27 06:27:02.664472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.174 [2024-11-27 06:27:02.664474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.174 [2024-11-27 06:27:02.664479] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:43.174 [2024-11-27 06:27:02.664481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:43.174 [2024-11-27 06:27:02.664485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:43.174 [2024-11-27 06:27:02.664491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:43.174 [2024-11-27 06:27:02.664497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.174 [2024-11-27 06:27:02.664512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.174 [2024-11-27 06:27:02.664563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.174 [2024-11-27 06:27:02.664566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.174 [2024-11-27 06:27:02.664568] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664570] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x743260): datao=0, datal=4096, cccid=0 00:24:43.174 [2024-11-27 06:27:02.664573] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x796c40) on tqpair(0x743260): expected_datao=0, payload_size=4096 00:24:43.174 [2024-11-27 06:27:02.664575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664588] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664590] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.174 [2024-11-27 06:27:02.664599] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.174 [2024-11-27 06:27:02.664600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.174 [2024-11-27 06:27:02.664607] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:43.174 [2024-11-27 06:27:02.664609] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:43.174 [2024-11-27 06:27:02.664612] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:43.174 [2024-11-27 06:27:02.664616] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:43.174 [2024-11-27 06:27:02.664619] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:43.174 [2024-11-27 06:27:02.664621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:43.174 [2024-11-27 06:27:02.664626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:43.174 [2024-11-27 06:27:02.664629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:43.174 [2024-11-27 06:27:02.664647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.174 [2024-11-27 06:27:02.664682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.174 [2024-11-27 06:27:02.664686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.174 [2024-11-27 06:27:02.664687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260 00:24:43.174 [2024-11-27 06:27:02.664694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.174 [2024-11-27 06:27:02.664706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664713] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.174 [2024-11-27 06:27:02.664717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.174 [2024-11-27 06:27:02.664727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.174 [2024-11-27 06:27:02.664736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:43.174 [2024-11-27 06:27:02.664740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:43.174 [2024-11-27 06:27:02.664744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.174 [2024-11-27 06:27:02.664746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x743260) 00:24:43.174 [2024-11-27 06:27:02.664750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.175 [2024-11-27 06:27:02.664761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796c40, cid 0, qid 0 00:24:43.175 [2024-11-27 06:27:02.664764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796dc0, cid 1, qid 0 00:24:43.175 [2024-11-27 06:27:02.664767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x796f40, cid 2, qid 0 00:24:43.175 [2024-11-27 06:27:02.664770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.175 [2024-11-27 06:27:02.664772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x797240, cid 4, qid 0 00:24:43.175 [2024-11-27 06:27:02.664841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.175 [2024-11-27 06:27:02.664844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.175 [2024-11-27 06:27:02.664846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x797240) on tqpair=0x743260 00:24:43.175 [2024-11-27 06:27:02.664851] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:43.175 [2024-11-27 06:27:02.664853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:43.175 [2024-11-27 06:27:02.664859] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x743260) 00:24:43.175 [2024-11-27 06:27:02.664865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.175 [2024-11-27 06:27:02.664873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x797240, cid 4, qid 0 00:24:43.175 [2024-11-27 06:27:02.664916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.175 [2024-11-27 06:27:02.664920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.175 [2024-11-27 06:27:02.664922] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664924] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x743260): datao=0, datal=4096, cccid=4 00:24:43.175 [2024-11-27 06:27:02.664926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x797240) on tqpair(0x743260): expected_datao=0, payload_size=4096 00:24:43.175 [2024-11-27 06:27:02.664928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664933] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664934] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.175 [2024-11-27 06:27:02.664943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.175 [2024-11-27 06:27:02.664944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x797240) on tqpair=0x743260 00:24:43.175 [2024-11-27 06:27:02.664954] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:43.175 [2024-11-27 06:27:02.664968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x743260) 00:24:43.175 [2024-11-27 06:27:02.664975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.175 [2024-11-27 06:27:02.664979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.664983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x743260) 00:24:43.175 [2024-11-27 06:27:02.664986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.175 [2024-11-27 06:27:02.664998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x797240, cid 4, qid 0 00:24:43.175 [2024-11-27 06:27:02.665001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7973c0, cid 5, qid 0 00:24:43.175 [2024-11-27 06:27:02.665068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.175 [2024-11-27 06:27:02.665071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:24:43.175 [2024-11-27 06:27:02.665073] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.665075] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x743260): datao=0, datal=1024, cccid=4 00:24:43.175 [2024-11-27 06:27:02.665078] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x797240) on tqpair(0x743260): expected_datao=0, payload_size=1024 00:24:43.175 [2024-11-27 06:27:02.665080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.665084] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.665086] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.665089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.175 [2024-11-27 06:27:02.665093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.175 [2024-11-27 06:27:02.665095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.175 [2024-11-27 06:27:02.665097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7973c0) on tqpair=0x743260 00:24:43.442 [2024-11-27 06:27:02.706936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.442 [2024-11-27 06:27:02.706949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.442 [2024-11-27 06:27:02.706951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.706954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x797240) on tqpair=0x743260 00:24:43.442 [2024-11-27 06:27:02.706965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.706967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x743260) 00:24:43.442 [2024-11-27 06:27:02.706972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.442 [2024-11-27 06:27:02.706987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x797240, cid 4, qid 0 00:24:43.442 [2024-11-27 06:27:02.707032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.442 [2024-11-27 06:27:02.707036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.442 [2024-11-27 06:27:02.707038] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x743260): datao=0, datal=3072, cccid=4 00:24:43.442 [2024-11-27 06:27:02.707042] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x797240) on tqpair(0x743260): expected_datao=0, payload_size=3072 00:24:43.442 [2024-11-27 06:27:02.707045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707049] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707051] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.442 [2024-11-27 06:27:02.707059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.442 [2024-11-27 06:27:02.707061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
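The GET LOG PAGE exchanges above are the discovery-log read: the entries arrive as 1024- and 3072-byte c2h transfers, followed by a final 8-byte re-read just below, which appears to re-check the generation counter before the output is trusted. The identify tool then prints the controller data and discovery entries. From the initiator side, a comparable discovery could be driven with nvme-cli (a hedged equivalent, not what this test runs):

    # Hedged nvme-cli equivalent of the discovery exchange, from the root namespace.
    nvme discover -t tcp -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"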
00:24:43.442 [2024-11-27 06:27:02.707063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x797240) on tqpair=0x743260 00:24:43.442 [2024-11-27 06:27:02.707068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x743260) 00:24:43.442 [2024-11-27 06:27:02.707075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.442 [2024-11-27 06:27:02.707085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x797240, cid 4, qid 0 00:24:43.442 [2024-11-27 06:27:02.707127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.442 [2024-11-27 06:27:02.707130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.442 [2024-11-27 06:27:02.707132] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707134] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x743260): datao=0, datal=8, cccid=4 00:24:43.442 [2024-11-27 06:27:02.707136] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x797240) on tqpair(0x743260): expected_datao=0, payload_size=8 00:24:43.442 [2024-11-27 06:27:02.707139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707143] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.442 [2024-11-27 06:27:02.707144] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.442 ===================================================== 00:24:43.442 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:43.442 ===================================================== 00:24:43.442 Controller Capabilities/Features 00:24:43.442 ================================ 00:24:43.442 Vendor ID: 0000 00:24:43.442 Subsystem Vendor ID: 0000 00:24:43.442 Serial Number: .................... 00:24:43.442 Model Number: ........................................ 
00:24:43.442 Firmware Version: 25.01
00:24:43.442 Recommended Arb Burst: 0
00:24:43.442 IEEE OUI Identifier: 00 00 00
00:24:43.442 Multi-path I/O
00:24:43.442 May have multiple subsystem ports: No
00:24:43.442 May have multiple controllers: No
00:24:43.442 Associated with SR-IOV VF: No
00:24:43.442 Max Data Transfer Size: 131072
00:24:43.442 Max Number of Namespaces: 0
00:24:43.442 Max Number of I/O Queues: 1024
00:24:43.442 NVMe Specification Version (VS): 1.3
00:24:43.442 NVMe Specification Version (Identify): 1.3
00:24:43.442 Maximum Queue Entries: 128
00:24:43.442 Contiguous Queues Required: Yes
00:24:43.442 Arbitration Mechanisms Supported
00:24:43.442 Weighted Round Robin: Not Supported
00:24:43.442 Vendor Specific: Not Supported
00:24:43.442 Reset Timeout: 15000 ms
00:24:43.442 Doorbell Stride: 4 bytes
00:24:43.442 NVM Subsystem Reset: Not Supported
00:24:43.442 Command Sets Supported
00:24:43.442 NVM Command Set: Supported
00:24:43.442 Boot Partition: Not Supported
00:24:43.442 Memory Page Size Minimum: 4096 bytes
00:24:43.442 Memory Page Size Maximum: 4096 bytes
00:24:43.442 Persistent Memory Region: Not Supported
00:24:43.442 Optional Asynchronous Events Supported
00:24:43.442 Namespace Attribute Notices: Not Supported
00:24:43.442 Firmware Activation Notices: Not Supported
00:24:43.442 ANA Change Notices: Not Supported
00:24:43.442 PLE Aggregate Log Change Notices: Not Supported
00:24:43.442 LBA Status Info Alert Notices: Not Supported
00:24:43.442 EGE Aggregate Log Change Notices: Not Supported
00:24:43.442 Normal NVM Subsystem Shutdown event: Not Supported
00:24:43.442 Zone Descriptor Change Notices: Not Supported
00:24:43.442 Discovery Log Change Notices: Supported
00:24:43.442 Controller Attributes
00:24:43.442 128-bit Host Identifier: Not Supported
00:24:43.442 Non-Operational Permissive Mode: Not Supported
00:24:43.442 NVM Sets: Not Supported
00:24:43.442 Read Recovery Levels: Not Supported
00:24:43.442 Endurance Groups: Not Supported
00:24:43.442 Predictable Latency Mode: Not Supported
00:24:43.442 Traffic Based Keep Alive: Not Supported
00:24:43.442 Namespace Granularity: Not Supported
00:24:43.442 SQ Associations: Not Supported
00:24:43.442 UUID List: Not Supported
00:24:43.442 Multi-Domain Subsystem: Not Supported
00:24:43.442 Fixed Capacity Management: Not Supported
00:24:43.442 Variable Capacity Management: Not Supported
00:24:43.442 Delete Endurance Group: Not Supported
00:24:43.442 Delete NVM Set: Not Supported
00:24:43.442 Extended LBA Formats Supported: Not Supported
00:24:43.442 Flexible Data Placement Supported: Not Supported
00:24:43.442 
00:24:43.442 Controller Memory Buffer Support
00:24:43.442 ================================
00:24:43.442 Supported: No
00:24:43.443 
00:24:43.443 Persistent Memory Region Support
00:24:43.443 ================================
00:24:43.443 Supported: No
00:24:43.443 
00:24:43.443 Admin Command Set Attributes
00:24:43.443 ============================
00:24:43.443 Security Send/Receive: Not Supported
00:24:43.443 Format NVM: Not Supported
00:24:43.443 Firmware Activate/Download: Not Supported
00:24:43.443 Namespace Management: Not Supported
00:24:43.443 Device Self-Test: Not Supported
00:24:43.443 Directives: Not Supported
00:24:43.443 NVMe-MI: Not Supported
00:24:43.443 Virtualization Management: Not Supported
00:24:43.443 Doorbell Buffer Config: Not Supported
00:24:43.443 Get LBA Status Capability: Not Supported
00:24:43.443 Command & Feature Lockdown Capability: Not Supported
00:24:43.443 Abort Command Limit: 1
00:24:43.443 Async Event Request Limit: 4
00:24:43.443 Number of Firmware Slots: N/A
00:24:43.443 Firmware Slot 1 Read-Only: N/A
00:24:43.443 Firmware Activation Without Reset: N/A
00:24:43.443 Multiple Update Detection Support: N/A
00:24:43.443 Firmware Update Granularity: No Information Provided
00:24:43.443 Per-Namespace SMART Log: No
00:24:43.443 Asymmetric Namespace Access Log Page: Not Supported
00:24:43.443 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:43.443 Command Effects Log Page: Not Supported
00:24:43.443 Get Log Page Extended Data: Supported
00:24:43.443 Telemetry Log Pages: Not Supported
00:24:43.443 Persistent Event Log Pages: Not Supported
00:24:43.443 Supported Log Pages Log Page: May Support
00:24:43.443 Commands Supported & Effects Log Page: Not Supported
00:24:43.443 Feature Identifiers & Effects Log Page: May Support
00:24:43.443 NVMe-MI Commands & Effects Log Page: May Support
00:24:43.443 Data Area 4 for Telemetry Log: Not Supported
00:24:43.443 Error Log Page Entries Supported: 128
00:24:43.443 Keep Alive: Not Supported
00:24:43.443 
00:24:43.443 NVM Command Set Attributes
00:24:43.443 ==========================
00:24:43.443 Submission Queue Entry Size
00:24:43.443 Max: 1
00:24:43.443 Min: 1
00:24:43.443 Completion Queue Entry Size
00:24:43.443 Max: 1
00:24:43.443 Min: 1
00:24:43.443 Number of Namespaces: 0
00:24:43.443 Compare Command: Not Supported
00:24:43.443 Write Uncorrectable Command: Not Supported
00:24:43.443 Dataset Management Command: Not Supported
00:24:43.443 Write Zeroes Command: Not Supported
00:24:43.443 Set Features Save Field: Not Supported
00:24:43.443 Reservations: Not Supported
00:24:43.443 Timestamp: Not Supported
00:24:43.443 Copy: Not Supported
00:24:43.443 Volatile Write Cache: Not Present
00:24:43.443 Atomic Write Unit (Normal): 1
00:24:43.443 Atomic Write Unit (PFail): 1
00:24:43.443 Atomic Compare & Write Unit: 1
00:24:43.443 Fused Compare & Write: Supported
00:24:43.443 Scatter-Gather List
00:24:43.443 SGL Command Set: Supported
00:24:43.443 SGL Keyed: Supported
00:24:43.443 SGL Bit Bucket Descriptor: Not Supported
00:24:43.443 SGL Metadata Pointer: Not Supported
00:24:43.443 Oversized SGL: Not Supported
00:24:43.443 SGL Metadata Address: Not Supported
00:24:43.443 SGL Offset: Supported
00:24:43.443 Transport SGL Data Block: Not Supported
00:24:43.443 Replay Protected Memory Block: Not Supported
00:24:43.443 
00:24:43.443 Firmware Slot Information
00:24:43.443 =========================
00:24:43.443 Active slot: 0
00:24:43.443 
00:24:43.443 
00:24:43.443 Error Log
00:24:43.443 =========
00:24:43.443 
00:24:43.443 Active Namespaces
00:24:43.443 =================
00:24:43.443 Discovery Log Page
00:24:43.443 ==================
00:24:43.443 Generation Counter: 2
00:24:43.443 Number of Records: 2
00:24:43.443 Record Format: 0
00:24:43.443 
00:24:43.443 Discovery Log Entry 0
00:24:43.443 ----------------------
00:24:43.443 Transport Type: 3 (TCP)
00:24:43.443 Address Family: 1 (IPv4)
00:24:43.443 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:43.443 Entry Flags:
00:24:43.443 Duplicate Returned Information: 1
00:24:43.443 Explicit Persistent Connection Support for Discovery: 1
00:24:43.443 Transport Requirements:
00:24:43.443 Secure Channel: Not Required
00:24:43.443 Port ID: 0 (0x0000)
00:24:43.443 Controller ID: 65535 (0xffff)
00:24:43.443 Admin Max SQ Size: 128
00:24:43.443 Transport Service Identifier: 4420
00:24:43.443 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:43.443 Transport Address: 10.0.0.3
00:24:43.443 
00:24:43.443 Discovery Log Entry 1
00:24:43.443 ----------------------
00:24:43.443 Transport Type: 3 (TCP)
00:24:43.443 Address Family: 1 (IPv4)
00:24:43.443 Subsystem Type: 2 (NVM Subsystem)
00:24:43.443 Entry Flags:
00:24:43.443 Duplicate Returned Information: 0
00:24:43.443 Explicit Persistent Connection Support for Discovery: 0
00:24:43.443 Transport Requirements:
00:24:43.443 Secure Channel: Not Required
00:24:43.443 Port ID: 0 (0x0000)
00:24:43.443 Controller ID: 65535 (0xffff)
00:24:43.443 Admin Max SQ Size: 128
00:24:43.443 Transport Service Identifier: 4420
00:24:43.443 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:43.443 Transport Address: 10.0.0.3
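Read together, the two discovery log entries advertise a single TCP listener at 10.0.0.3:4420 that serves both the discovery subsystem itself (entry 0) and the I/O subsystem nqn.2016-06.io.spdk:cnode1 (entry 1). As a rough sketch of what a host would do with this information outside the test harness, assuming a stock nvme-cli install and that 10.0.0.3 is reachable (neither is part of this run):

    nvme discover -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1

The discover step fetches the same log page printed above; the connect step creates a controller for cnode1 over the same transport address and service identifier.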
00:24:43.443 [2024-11-27 06:27:02.751920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:43.443 [2024-11-27 06:27:02.751933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:43.443 [2024-11-27 06:27:02.751935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:43.443 [2024-11-27 06:27:02.751938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x797240) on tqpair=0x743260
00:24:43.443 [2024-11-27 06:27:02.751999] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:43.443 [2024-11-27 06:27:02.752005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796c40) on tqpair=0x743260
00:24:43.443 [2024-11-27 06:27:02.752009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.443 [2024-11-27 06:27:02.752014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796dc0) on tqpair=0x743260
00:24:43.443 [2024-11-27 06:27:02.752016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.444 [2024-11-27 06:27:02.752019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x796f40) on tqpair=0x743260
00:24:43.444 [2024-11-27 06:27:02.752022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.444 [2024-11-27 06:27:02.752024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260
00:24:43.444 [2024-11-27 06:27:02.752027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.444 [2024-11-27 06:27:02.752034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260)
00:24:43.444 [2024-11-27 06:27:02.752044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:43.444 [2024-11-27 06:27:02.752056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0
00:24:43.444 [2024-11-27 06:27:02.752088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:43.444 [2024-11-27 06:27:02.752092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:43.444 [2024-11-27 06:27:02.752094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260
00:24:43.444 [2024-11-27 06:27:02.752100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260)
00:24:43.444 [2024-11-27 06:27:02.752108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:43.444 [2024-11-27 06:27:02.752118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0
00:24:43.444 [2024-11-27 06:27:02.752159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:43.444 [2024-11-27 06:27:02.752163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:43.444 [2024-11-27 06:27:02.752164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260
00:24:43.444 [2024-11-27 06:27:02.752169] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:24:43.444 [2024-11-27 06:27:02.752172] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:24:43.444 [2024-11-27 06:27:02.752177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260)
00:24:43.444 [2024-11-27 06:27:02.752185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:43.444 [2024-11-27 06:27:02.752193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0
00:24:43.444 [2024-11-27 06:27:02.752223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:43.444 [2024-11-27 06:27:02.752227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:43.444 [2024-11-27 06:27:02.752229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260
00:24:43.444 [2024-11-27 06:27:02.752237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:43.444 [2024-11-27 06:27:02.752240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260)
00:24:43.444 [2024-11-27 06:27:02.752244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:43.444 [2024-11-27 06:27:02.752252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0
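The trace after the log page is the fabrics form of a controller shutdown: the ABORTED - SQ DELETION completions are the admin queue being drained, the single FABRIC PROPERTY SET writes CC.SHN, and the FABRIC PROPERTY GET records are the host polling CSTS until SHST reports shutdown complete (RTD3E = 0 us here, with a 10000 ms timeout). A minimal sketch of the same poll from a shell, assuming an nvme-cli build with the fabrics get-property subcommand and a hypothetical connected controller at /dev/nvme0:

    # CSTS sits at property offset 0x1c; SHST is bits 3:2 of the value,
    # and 2 means shutdown processing complete
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable
    # decoding a raw read-back by hand (0x8 is a hypothetical CSTS value):
    csts=0x8; echo $(( (csts >> 2) & 0x3 ))   # prints 2 -> shutdown complete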
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.444 [2024-11-27 06:27:02.752301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.444 [2024-11-27 06:27:02.752309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.444 [2024-11-27 06:27:02.752316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.444 [2024-11-27 06:27:02.752348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.444 [2024-11-27 06:27:02.752352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.444 [2024-11-27 06:27:02.752353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.444 [2024-11-27 06:27:02.752361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.444 [2024-11-27 06:27:02.752369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.444 [2024-11-27 06:27:02.752377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.444 [2024-11-27 06:27:02.752410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.444 [2024-11-27 06:27:02.752414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.444 [2024-11-27 06:27:02.752416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.444 [2024-11-27 06:27:02.752423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.444 [2024-11-27 06:27:02.752431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.444 [2024-11-27 06:27:02.752439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.444 [2024-11-27 06:27:02.752471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.444 [2024-11-27 06:27:02.752474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.444 [2024-11-27 06:27:02.752476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.444 
[2024-11-27 06:27:02.752484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.444 [2024-11-27 06:27:02.752492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.444 [2024-11-27 06:27:02.752499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.444 [2024-11-27 06:27:02.752531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.444 [2024-11-27 06:27:02.752535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.444 [2024-11-27 06:27:02.752537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.444 [2024-11-27 06:27:02.752539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.752560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.752601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.752605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.752607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.752631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.752661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.752664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.752666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 
06:27:02.752678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.752691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.752722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.752730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.752732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.752756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.752788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.752792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.752793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.752817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.752851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.752854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.752856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.752880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.752917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.752921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.752923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.752948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.752977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.752981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.752983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.752990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.752994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.752998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.753006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 06:27:02.753039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.753046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.753048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.753050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.445 [2024-11-27 06:27:02.753056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.753058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.445 [2024-11-27 06:27:02.753060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.445 [2024-11-27 06:27:02.753064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.445 [2024-11-27 06:27:02.753072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.445 [2024-11-27 
06:27:02.753103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.445 [2024-11-27 06:27:02.753107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.445 [2024-11-27 06:27:02.753109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 
06:27:02.753293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 
00:24:43.446 [2024-11-27 06:27:02.753492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.446 [2024-11-27 06:27:02.753629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.446 [2024-11-27 06:27:02.753638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.446 [2024-11-27 06:27:02.753669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.446 [2024-11-27 06:27:02.753673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.446 [2024-11-27 06:27:02.753674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.446 [2024-11-27 06:27:02.753682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.446 [2024-11-27 06:27:02.753684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:43.447 [2024-11-27 06:27:02.753686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.753690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.753698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.753730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.753734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.753735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.753743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.753751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.753759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.753789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.753793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.753795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.753802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.753810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.753818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.753848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.753851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.753853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.753861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.753869] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.753876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.753913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.753917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.753919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.753928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.753936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.753945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.753977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.753981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.753982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.753990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.753994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.753998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.754006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.754037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.754041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.754043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.754050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.754058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.754066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.754096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.754100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.754101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.754109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.754117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.754125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.754155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.754159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.754161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.754169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.447 [2024-11-27 06:27:02.754177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.447 [2024-11-27 06:27:02.754186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.447 [2024-11-27 06:27:02.754221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.447 [2024-11-27 06:27:02.754229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.447 [2024-11-27 06:27:02.754231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.447 [2024-11-27 06:27:02.754239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.447 [2024-11-27 06:27:02.754243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.448 [2024-11-27 06:27:02.754247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.448 [2024-11-27 06:27:02.754255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.448 [2024-11-27 06:27:02.754288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.448 [2024-11-27 06:27:02.754292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:43.448 [2024-11-27 06:27:02.754293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.448 [2024-11-27 06:27:02.754301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.448 [2024-11-27 06:27:02.754309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.448 [2024-11-27 06:27:02.754318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.448 [2024-11-27 06:27:02.754353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.448 [2024-11-27 06:27:02.754358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.448 [2024-11-27 06:27:02.754360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.448 [2024-11-27 06:27:02.754368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.448 [2024-11-27 06:27:02.754376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.448 [2024-11-27 06:27:02.754385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.448 [2024-11-27 06:27:02.754418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.448 [2024-11-27 06:27:02.754425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.448 [2024-11-27 06:27:02.754427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.448 [2024-11-27 06:27:02.754435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.448 [2024-11-27 06:27:02.754443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.448 [2024-11-27 06:27:02.754452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.448 [2024-11-27 06:27:02.754487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.448 [2024-11-27 06:27:02.754494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.448 [2024-11-27 06:27:02.754496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754498] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.448 [2024-11-27 06:27:02.754504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.448 [2024-11-27 06:27:02.754512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.448 [2024-11-27 06:27:02.754520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.448 [2024-11-27 06:27:02.754552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.448 [2024-11-27 06:27:02.754557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.448 [2024-11-27 06:27:02.754558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.448 [2024-11-27 06:27:02.754566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.448 [2024-11-27 06:27:02.754575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.448 [2024-11-27 06:27:02.754583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.448 [2024-11-27 06:27:02.754614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.448 [2024-11-27 06:27:02.754622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.448 [2024-11-27 06:27:02.754624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.448 [2024-11-27 06:27:02.754631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.448 [2024-11-27 06:27:02.754640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.448 [2024-11-27 06:27:02.754648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.448 [2024-11-27 06:27:02.754678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.448 [2024-11-27 06:27:02.754681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.448 [2024-11-27 06:27:02.754683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.448 [2024-11-27 06:27:02.754692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754694] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.448 [2024-11-27 06:27:02.754695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.449 [2024-11-27 06:27:02.754700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.449 [2024-11-27 06:27:02.754707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.449 [2024-11-27 06:27:02.754737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.449 [2024-11-27 06:27:02.754741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.449 [2024-11-27 06:27:02.754743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.449 [2024-11-27 06:27:02.754750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.449 [2024-11-27 06:27:02.754758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.449 [2024-11-27 06:27:02.754766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.449 [2024-11-27 06:27:02.754801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.449 [2024-11-27 06:27:02.754808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.449 [2024-11-27 06:27:02.754810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.449 [2024-11-27 06:27:02.754818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.449 [2024-11-27 06:27:02.754826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.449 [2024-11-27 06:27:02.754834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.449 [2024-11-27 06:27:02.754868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.449 [2024-11-27 06:27:02.754872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.449 [2024-11-27 06:27:02.754874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.449 [2024-11-27 06:27:02.754882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.449 [2024-11-27 06:27:02.754886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x743260) 00:24:43.449 [2024-11-27 
06:27:02.754890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.449 [2024-11-27 06:27:02.754899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.451 [2024-11-27 06:27:02.755898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.755899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x743260) 00:24:43.451 [2024-11-27 06:27:02.759910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.451 [2024-11-27 06:27:02.759929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7970c0, cid 3, qid 0 00:24:43.451 [2024-11-27 06:27:02.759966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.451 [2024-11-27 06:27:02.759970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.451 [2024-11-27 06:27:02.759971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.759974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7970c0) on tqpair=0x743260 00:24:43.451 [2024-11-27 06:27:02.759979] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:43.451 00:24:43.451 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:43.451 [2024-11-27 06:27:02.791538] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:43.451 [2024-11-27 06:27:02.791574] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105485 ] 00:24:43.451 [2024-11-27 06:27:02.904322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
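For orientation: the spdk_nvme_identify run above reduces to parse-transport-ID, connect (which drives the admin-queue state machine traced in the records below), read the cached IDENTIFY CONTROLLER data, and detach. Here is a minimal C sketch of that flow against SPDK's public NVMe API; it is not the tool's actual source: the program name is made up, error handling is skeletal, and it assumes the same reachable target at 10.0.0.3:4420.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch"; /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same connect string the test passes via -r above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* spdk_nvme_connect() runs the admin-queue state machine the DEBUG
     * records below walk through: icreq/icresp, FABRIC CONNECT, read
     * VS/CAP, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, AER config,
     * keep-alive and queue-count setup. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr); /* cached IDENTIFY CONTROLLER data */
    printf("Serial Number: %.20s\n", (const char *)cdata->sn);
    printf("Model Number: %.40s\n", (const char *)cdata->mn);

    spdk_nvme_detach(ctrlr); /* CC.SHN shutdown, as polled in the records above */
    return 0;
}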
00:24:43.451 [2024-11-27 06:27:02.931699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:43.451 [2024-11-27 06:27:02.931742] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:43.451 [2024-11-27 06:27:02.931745] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:43.451 [2024-11-27 06:27:02.931757] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:43.451 [2024-11-27 06:27:02.931764] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:43.451 [2024-11-27 06:27:02.935952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:43.451 [2024-11-27 06:27:02.935990] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9e9260 0 00:24:43.451 [2024-11-27 06:27:02.936047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:43.451 [2024-11-27 06:27:02.936051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:43.451 [2024-11-27 06:27:02.936054] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:43.451 [2024-11-27 06:27:02.936056] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:43.451 [2024-11-27 06:27:02.936074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.936077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.936079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.451 [2024-11-27 06:27:02.936087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:43.451 [2024-11-27 06:27:02.936099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.451 [2024-11-27 06:27:02.943915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.451 [2024-11-27 06:27:02.943926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.451 [2024-11-27 06:27:02.943928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.943931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.451 [2024-11-27 06:27:02.943937] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:43.451 [2024-11-27 06:27:02.943941] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:43.451 [2024-11-27 06:27:02.943945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:43.451 [2024-11-27 06:27:02.943954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.943956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.943959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.451 [2024-11-27 06:27:02.943964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.451 [2024-11-27 06:27:02.943977] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.451 [2024-11-27 06:27:02.944027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.451 [2024-11-27 06:27:02.944031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.451 [2024-11-27 06:27:02.944033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.451 [2024-11-27 06:27:02.944038] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:43.451 [2024-11-27 06:27:02.944042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:43.451 [2024-11-27 06:27:02.944046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.451 [2024-11-27 06:27:02.944054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.451 [2024-11-27 06:27:02.944062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.451 [2024-11-27 06:27:02.944105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.451 [2024-11-27 06:27:02.944109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.451 [2024-11-27 06:27:02.944111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.451 [2024-11-27 06:27:02.944116] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:43.451 [2024-11-27 06:27:02.944120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:43.451 [2024-11-27 06:27:02.944124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.451 [2024-11-27 06:27:02.944132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.451 [2024-11-27 06:27:02.944140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.451 [2024-11-27 06:27:02.944182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.451 [2024-11-27 06:27:02.944186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.451 [2024-11-27 06:27:02.944188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.451 [2024-11-27 06:27:02.944194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:43.451 [2024-11-27 06:27:02.944199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.451 [2024-11-27 06:27:02.944203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.451 [2024-11-27 06:27:02.944207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.451 [2024-11-27 06:27:02.944215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.451 [2024-11-27 06:27:02.944258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.451 [2024-11-27 06:27:02.944262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.452 [2024-11-27 06:27:02.944264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.452 [2024-11-27 06:27:02.944269] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:43.452 [2024-11-27 06:27:02.944271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:43.452 [2024-11-27 06:27:02.944275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:43.452 [2024-11-27 06:27:02.944378] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:43.452 [2024-11-27 06:27:02.944386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:43.452 [2024-11-27 06:27:02.944391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.452 [2024-11-27 06:27:02.944400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.452 [2024-11-27 06:27:02.944408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.452 [2024-11-27 06:27:02.944452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.452 [2024-11-27 06:27:02.944459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.452 [2024-11-27 06:27:02.944461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.452 [2024-11-27 06:27:02.944466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:43.452 [2024-11-27 06:27:02.944471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944473] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.452 [2024-11-27 06:27:02.944479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.452 [2024-11-27 06:27:02.944488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.452 [2024-11-27 06:27:02.944529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.452 [2024-11-27 06:27:02.944533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.452 [2024-11-27 06:27:02.944535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.452 [2024-11-27 06:27:02.944540] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:43.452 [2024-11-27 06:27:02.944543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:43.452 [2024-11-27 06:27:02.944547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:43.452 [2024-11-27 06:27:02.944552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:43.452 [2024-11-27 06:27:02.944557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.452 [2024-11-27 06:27:02.944564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.452 [2024-11-27 06:27:02.944572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.452 [2024-11-27 06:27:02.944658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.452 [2024-11-27 06:27:02.944667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.452 [2024-11-27 06:27:02.944669] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944671] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=4096, cccid=0 00:24:43.452 [2024-11-27 06:27:02.944674] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3cc40) on tqpair(0x9e9260): expected_datao=0, payload_size=4096 00:24:43.452 [2024-11-27 06:27:02.944676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944681] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944684] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.452 [2024-11-27 06:27:02.944692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.452 [2024-11-27 06:27:02.944694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.452 [2024-11-27 06:27:02.944701] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:43.452 [2024-11-27 06:27:02.944703] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:43.452 [2024-11-27 06:27:02.944705] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:43.452 [2024-11-27 06:27:02.944712] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:43.452 [2024-11-27 06:27:02.944714] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:43.452 [2024-11-27 06:27:02.944717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:43.452 [2024-11-27 06:27:02.944722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:43.452 [2024-11-27 06:27:02.944725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.452 [2024-11-27 06:27:02.944734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:43.452 [2024-11-27 06:27:02.944743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.452 [2024-11-27 06:27:02.944786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.452 [2024-11-27 06:27:02.944790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.452 [2024-11-27 06:27:02.944791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.452 [2024-11-27 06:27:02.944798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9e9260) 00:24:43.452 [2024-11-27 06:27:02.944806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.452 [2024-11-27 06:27:02.944809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9e9260) 00:24:43.452 [2024-11-27 06:27:02.944817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.452 [2024-11-27 06:27:02.944820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
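The *NOTICE* record "IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001" above is the admin IDENTIFY CONTROLLER command (opcode 06h, CNS 01h) being printed as it goes out, and the datal=4096 c2h_data records are its 4 KiB response. A hedged sketch of issuing that same command by hand through SPDK's raw admin interface follows; identify_controller and identify_done are hypothetical helper names, and the polling loop mirrors what the driver's init path does internally.

#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void identify_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cpl;            /* status check elided in this sketch */
    *(bool *)arg = true;  /* completion flag polled below */
}

static int identify_controller(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_cmd cmd = {0};
    struct spdk_nvme_ctrlr_data *cdata;
    bool done = false;

    /* 4 KiB DMA-safe buffer for the IDENTIFY payload (matches the
     * datal=4096, payload_size=4096 c2h_data records in this log). */
    cdata = spdk_dma_zmalloc(sizeof(*cdata), 0x1000, NULL);
    if (cdata == NULL) {
        return -1;
    }

    cmd.opc = SPDK_NVME_OPC_IDENTIFY;     /* opcode 06h */
    cmd.cdw10 = SPDK_NVME_IDENTIFY_CTRLR; /* CNS 01h -> "cdw10:00000001" */

    if (spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, cdata, sizeof(*cdata),
                                      identify_done, &done) != 0) {
        spdk_dma_free(cdata);
        return -1;
    }

    /* Polling the admin queue is what produces the capsule-response
     * ("pdu type = 5") records throughout this log. */
    while (!done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }

    spdk_dma_free(cdata);
    return 0;
}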
00:24:43.452 [2024-11-27 06:27:02.944822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.452 [2024-11-27 06:27:02.944824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9e9260) 00:24:43.452 [2024-11-27 06:27:02.944828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.452 [2024-11-27 06:27:02.944832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.944834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.944836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.453 [2024-11-27 06:27:02.944839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.453 [2024-11-27 06:27:02.944842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.944846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.944849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.944851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e9260) 00:24:43.453 [2024-11-27 06:27:02.944856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.453 [2024-11-27 06:27:02.944867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cc40, cid 0, qid 0 00:24:43.453 [2024-11-27 06:27:02.944870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cdc0, cid 1, qid 0 00:24:43.453 [2024-11-27 06:27:02.944873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3cf40, cid 2, qid 0 00:24:43.453 [2024-11-27 06:27:02.944875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.453 [2024-11-27 06:27:02.944878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d240, cid 4, qid 0 00:24:43.453 [2024-11-27 06:27:02.944977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.453 [2024-11-27 06:27:02.944981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.453 [2024-11-27 06:27:02.944983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.944985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d240) on tqpair=0x9e9260 00:24:43.453 [2024-11-27 06:27:02.944988] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:43.453 [2024-11-27 06:27:02.944991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.944995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.944999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to wait for set number of queues (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.945002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e9260) 00:24:43.453 [2024-11-27 06:27:02.945010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:43.453 [2024-11-27 06:27:02.945019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d240, cid 4, qid 0 00:24:43.453 [2024-11-27 06:27:02.945063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.453 [2024-11-27 06:27:02.945066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.453 [2024-11-27 06:27:02.945068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d240) on tqpair=0x9e9260 00:24:43.453 [2024-11-27 06:27:02.945116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.945121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.945126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e9260) 00:24:43.453 [2024-11-27 06:27:02.945132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.453 [2024-11-27 06:27:02.945140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d240, cid 4, qid 0 00:24:43.453 [2024-11-27 06:27:02.945190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.453 [2024-11-27 06:27:02.945194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.453 [2024-11-27 06:27:02.945195] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945197] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=4096, cccid=4 00:24:43.453 [2024-11-27 06:27:02.945200] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d240) on tqpair(0x9e9260): expected_datao=0, payload_size=4096 00:24:43.453 [2024-11-27 06:27:02.945202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945206] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.453 [2024-11-27 06:27:02.945216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.453 [2024-11-27 06:27:02.945218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d240) on 
tqpair=0x9e9260 00:24:43.453 [2024-11-27 06:27:02.945225] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:43.453 [2024-11-27 06:27:02.945230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.945236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.945240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e9260) 00:24:43.453 [2024-11-27 06:27:02.945246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.453 [2024-11-27 06:27:02.945254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d240, cid 4, qid 0 00:24:43.453 [2024-11-27 06:27:02.945313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.453 [2024-11-27 06:27:02.945317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.453 [2024-11-27 06:27:02.945319] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945321] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=4096, cccid=4 00:24:43.453 [2024-11-27 06:27:02.945323] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d240) on tqpair(0x9e9260): expected_datao=0, payload_size=4096 00:24:43.453 [2024-11-27 06:27:02.945325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945329] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945331] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.453 [2024-11-27 06:27:02.945339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.453 [2024-11-27 06:27:02.945341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d240) on tqpair=0x9e9260 00:24:43.453 [2024-11-27 06:27:02.945351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.945356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:43.453 [2024-11-27 06:27:02.945360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e9260) 00:24:43.453 [2024-11-27 06:27:02.945366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.453 [2024-11-27 06:27:02.945375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d240, cid 4, qid 0 00:24:43.453 [2024-11-27 06:27:02.945422] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.453 [2024-11-27 06:27:02.945426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.453 [2024-11-27 06:27:02.945428] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.453 [2024-11-27 06:27:02.945430] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=4096, cccid=4 00:24:43.453 [2024-11-27 06:27:02.945432] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d240) on tqpair(0x9e9260): expected_datao=0, payload_size=4096 00:24:43.453 [2024-11-27 06:27:02.945434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945438] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945440] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.454 [2024-11-27 06:27:02.945448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.454 [2024-11-27 06:27:02.945450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d240) on tqpair=0x9e9260 00:24:43.454 [2024-11-27 06:27:02.945456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:43.454 [2024-11-27 06:27:02.945460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:43.454 [2024-11-27 06:27:02.945465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:43.454 [2024-11-27 06:27:02.945468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:43.454 [2024-11-27 06:27:02.945471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:43.454 [2024-11-27 06:27:02.945474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:43.454 [2024-11-27 06:27:02.945477] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:43.454 [2024-11-27 06:27:02.945480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:43.454 [2024-11-27 06:27:02.945482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:43.454 [2024-11-27 06:27:02.945492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e9260) 00:24:43.454 [2024-11-27 06:27:02.945498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.454 [2024-11-27 06:27:02.945501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.454 
[2024-11-27 06:27:02.945503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e9260) 00:24:43.454 [2024-11-27 06:27:02.945509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.454 [2024-11-27 06:27:02.945520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d240, cid 4, qid 0 00:24:43.454 [2024-11-27 06:27:02.945523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d3c0, cid 5, qid 0 00:24:43.454 [2024-11-27 06:27:02.945586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.454 [2024-11-27 06:27:02.945594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.454 [2024-11-27 06:27:02.945595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d240) on tqpair=0x9e9260 00:24:43.454 [2024-11-27 06:27:02.945602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.454 [2024-11-27 06:27:02.945606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.454 [2024-11-27 06:27:02.945608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d3c0) on tqpair=0x9e9260 00:24:43.454 [2024-11-27 06:27:02.945615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e9260) 00:24:43.454 [2024-11-27 06:27:02.945621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.454 [2024-11-27 06:27:02.945630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d3c0, cid 5, qid 0 00:24:43.454 [2024-11-27 06:27:02.945676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.454 [2024-11-27 06:27:02.945680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.454 [2024-11-27 06:27:02.945681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d3c0) on tqpair=0x9e9260 00:24:43.454 [2024-11-27 06:27:02.945689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e9260) 00:24:43.454 [2024-11-27 06:27:02.945694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.454 [2024-11-27 06:27:02.945702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d3c0, cid 5, qid 0 00:24:43.454 [2024-11-27 06:27:02.945744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.454 [2024-11-27 06:27:02.945748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.454 [2024-11-27 06:27:02.945749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945751] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d3c0) on tqpair=0x9e9260 00:24:43.454 [2024-11-27 06:27:02.945757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e9260) 00:24:43.454 [2024-11-27 06:27:02.945762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.454 [2024-11-27 06:27:02.945770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d3c0, cid 5, qid 0 00:24:43.454 [2024-11-27 06:27:02.945816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.454 [2024-11-27 06:27:02.945820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.454 [2024-11-27 06:27:02.945822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d3c0) on tqpair=0x9e9260 00:24:43.454 [2024-11-27 06:27:02.945832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9e9260) 00:24:43.454 [2024-11-27 06:27:02.945838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.454 [2024-11-27 06:27:02.945842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.454 [2024-11-27 06:27:02.945844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9e9260) 00:24:43.454 [2024-11-27 06:27:02.945848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.455 [2024-11-27 06:27:02.945852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.945853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9e9260) 00:24:43.455 [2024-11-27 06:27:02.945857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.455 [2024-11-27 06:27:02.945861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.945863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9e9260) 00:24:43.455 [2024-11-27 06:27:02.945867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.455 [2024-11-27 06:27:02.945876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d3c0, cid 5, qid 0 00:24:43.455 [2024-11-27 06:27:02.945879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d240, cid 4, qid 0 00:24:43.455 [2024-11-27 06:27:02.945882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d540, cid 6, qid 0 00:24:43.455 [2024-11-27 06:27:02.945885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d6c0, cid 7, qid 0 00:24:43.455 [2024-11-27 06:27:02.946025] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.455 [2024-11-27 06:27:02.946033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.455 [2024-11-27 06:27:02.946035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946037] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=8192, cccid=5 00:24:43.455 [2024-11-27 06:27:02.946039] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d3c0) on tqpair(0x9e9260): expected_datao=0, payload_size=8192 00:24:43.455 [2024-11-27 06:27:02.946042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946060] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946063] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.455 [2024-11-27 06:27:02.946070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.455 [2024-11-27 06:27:02.946072] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946074] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=512, cccid=4 00:24:43.455 [2024-11-27 06:27:02.946076] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d240) on tqpair(0x9e9260): expected_datao=0, payload_size=512 00:24:43.455 [2024-11-27 06:27:02.946078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946082] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946084] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.455 [2024-11-27 06:27:02.946091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.455 [2024-11-27 06:27:02.946093] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946095] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=512, cccid=6 00:24:43.455 [2024-11-27 06:27:02.946097] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d540) on tqpair(0x9e9260): expected_datao=0, payload_size=512 00:24:43.455 [2024-11-27 06:27:02.946099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946103] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946105] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:43.455 [2024-11-27 06:27:02.946111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:43.455 [2024-11-27 06:27:02.946113] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9e9260): datao=0, datal=4096, cccid=7 00:24:43.455 [2024-11-27 06:27:02.946117] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa3d6c0) on tqpair(0x9e9260): expected_datao=0, payload_size=4096 00:24:43.455 [2024-11-27 06:27:02.946119] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946123] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946125] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.455 [2024-11-27 06:27:02.946134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.455 [2024-11-27 06:27:02.946135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d3c0) on tqpair=0x9e9260 00:24:43.455 [2024-11-27 06:27:02.946146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.455 [2024-11-27 06:27:02.946149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.455 [2024-11-27 06:27:02.946151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.455 [2024-11-27 06:27:02.946153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d240) on tqpair=0x9e9260 00:24:43.455 [2024-11-27 06:27:02.946160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.455 ===================================================== 00:24:43.455 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.455 ===================================================== 00:24:43.455 Controller Capabilities/Features 00:24:43.455 ================================ 00:24:43.455 Vendor ID: 8086 00:24:43.455 Subsystem Vendor ID: 8086 00:24:43.455 Serial Number: SPDK00000000000001 00:24:43.455 Model Number: SPDK bdev Controller 00:24:43.455 Firmware Version: 25.01 00:24:43.455 Recommended Arb Burst: 6 00:24:43.455 IEEE OUI Identifier: e4 d2 5c 00:24:43.455 Multi-path I/O 00:24:43.455 May have multiple subsystem ports: Yes 00:24:43.455 May have multiple controllers: Yes 00:24:43.455 Associated with SR-IOV VF: No 00:24:43.455 Max Data Transfer Size: 131072 00:24:43.455 Max Number of Namespaces: 32 00:24:43.455 Max Number of I/O Queues: 127 00:24:43.455 NVMe Specification Version (VS): 1.3 00:24:43.455 NVMe Specification Version (Identify): 1.3 00:24:43.455 Maximum Queue Entries: 128 00:24:43.455 Contiguous Queues Required: Yes 00:24:43.455 Arbitration Mechanisms Supported 00:24:43.455 Weighted Round Robin: Not Supported 00:24:43.455 Vendor Specific: Not Supported 00:24:43.455 Reset Timeout: 15000 ms 00:24:43.455 Doorbell Stride: 4 bytes 00:24:43.455 NVM Subsystem Reset: Not Supported 00:24:43.455 Command Sets Supported 00:24:43.455 NVM Command Set: Supported 00:24:43.455 Boot Partition: Not Supported 00:24:43.455 Memory Page Size Minimum: 4096 bytes 00:24:43.455 Memory Page Size Maximum: 4096 bytes 00:24:43.455 Persistent Memory Region: Not Supported 00:24:43.455 Optional Asynchronous Events Supported 00:24:43.455 Namespace Attribute Notices: Supported 00:24:43.455 Firmware Activation Notices: Not Supported 00:24:43.455 ANA Change Notices: Not Supported 00:24:43.455 PLE Aggregate Log Change Notices: Not Supported 00:24:43.455 LBA Status Info Alert Notices: Not Supported 00:24:43.455 EGE Aggregate Log Change Notices: Not Supported 00:24:43.455 Normal NVM Subsystem Shutdown event: Not Supported 00:24:43.455 Zone Descriptor Change Notices: Not Supported 00:24:43.455 Discovery Log Change Notices: Not Supported 00:24:43.455 Controller Attributes 00:24:43.455 128-bit Host Identifier: Supported 
00:24:43.455 Non-Operational Permissive Mode: Not Supported 00:24:43.455 NVM Sets: Not Supported 00:24:43.455 Read Recovery Levels: Not Supported 00:24:43.455 Endurance Groups: Not Supported 00:24:43.455 Predictable Latency Mode: Not Supported 00:24:43.456 Traffic Based Keep Alive: Not Supported 00:24:43.456 Namespace Granularity: Not Supported 00:24:43.456 SQ Associations: Not Supported 00:24:43.456 UUID List: Not Supported 00:24:43.456 Multi-Domain Subsystem: Not Supported 00:24:43.456 Fixed Capacity Management: Not Supported 00:24:43.456 Variable Capacity Management: Not Supported 00:24:43.456 Delete Endurance Group: Not Supported 00:24:43.456 Delete NVM Set: Not Supported 00:24:43.456 Extended LBA Formats Supported: Not Supported 00:24:43.456 Flexible Data Placement Supported: Not Supported 00:24:43.456 00:24:43.456 Controller Memory Buffer Support 00:24:43.456 ================================ 00:24:43.456 Supported: No 00:24:43.456 00:24:43.456 Persistent Memory Region Support 00:24:43.456 ================================ 00:24:43.456 Supported: No 00:24:43.456 00:24:43.456 Admin Command Set Attributes 00:24:43.456 ============================ 00:24:43.456 Security Send/Receive: Not Supported 00:24:43.456 Format NVM: Not Supported 00:24:43.456 Firmware Activate/Download: Not Supported 00:24:43.456 Namespace Management: Not Supported 00:24:43.456 Device Self-Test: Not Supported 00:24:43.456 Directives: Not Supported 00:24:43.456 NVMe-MI: Not Supported 00:24:43.456 Virtualization Management: Not Supported 00:24:43.456 Doorbell Buffer Config: Not Supported 00:24:43.456 Get LBA Status Capability: Not Supported 00:24:43.456 Command & Feature Lockdown Capability: Not Supported 00:24:43.456 Abort Command Limit: 4 00:24:43.456 Async Event Request Limit: 4 00:24:43.456 Number of Firmware Slots: N/A 00:24:43.456 Firmware Slot 1 Read-Only: N/A 00:24:43.456 Firmware Activation Without Reset: N/A 00:24:43.456 Multiple Update Detection Support: N/A 00:24:43.456 Firmware Update Granularity: No Information Provided 00:24:43.456 Per-Namespace SMART Log: No 00:24:43.456 Asymmetric Namespace Access Log Page: Not Supported 00:24:43.456 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:43.456 Command Effects Log Page: Supported 00:24:43.456 Get Log Page Extended Data: Supported 00:24:43.456 Telemetry Log Pages: Not Supported 00:24:43.456 Persistent Event Log Pages: Not Supported 00:24:43.456 Supported Log Pages Log Page: May Support 00:24:43.456 Commands Supported & Effects Log Page: Not Supported 00:24:43.456 Feature Identifiers & Effects Log Page: May Support 00:24:43.456 NVMe-MI Commands & Effects Log Page: May Support 00:24:43.456 Data Area 4 for Telemetry Log: Not Supported 00:24:43.456 Error Log Page Entries Supported: 128 00:24:43.456 Keep Alive: Supported 00:24:43.456 Keep Alive Granularity: 10000 ms 00:24:43.456 00:24:43.456 NVM Command Set Attributes 00:24:43.456 ========================== 00:24:43.456 Submission Queue Entry Size 00:24:43.456 Max: 64 00:24:43.456 Min: 64 00:24:43.456 Completion Queue Entry Size 00:24:43.456 Max: 16 00:24:43.456 Min: 16 00:24:43.456 Number of Namespaces: 32 00:24:43.456 Compare Command: Supported 00:24:43.456 Write Uncorrectable Command: Not Supported 00:24:43.456 Dataset Management Command: Supported 00:24:43.456 Write Zeroes Command: Supported 00:24:43.456 Set Features Save Field: Not Supported 00:24:43.456 Reservations: Supported 00:24:43.456 Timestamp: Not Supported 00:24:43.456 Copy: Supported 00:24:43.456 Volatile Write Cache: Present 00:24:43.456 Atomic Write
Unit (Normal): 1 00:24:43.456 Atomic Write Unit (PFail): 1 00:24:43.456 Atomic Compare & Write Unit: 1 00:24:43.456 Fused Compare & Write: Supported 00:24:43.456 Scatter-Gather List 00:24:43.456 SGL Command Set: Supported 00:24:43.456 SGL Keyed: Supported 00:24:43.456 SGL Bit Bucket Descriptor: Not Supported 00:24:43.456 SGL Metadata Pointer: Not Supported 00:24:43.456 Oversized SGL: Not Supported 00:24:43.456 SGL Metadata Address: Not Supported 00:24:43.456 SGL Offset: Supported 00:24:43.456 Transport SGL Data Block: Not Supported 00:24:43.456 Replay Protected Memory Block: Not Supported 00:24:43.456 00:24:43.456 Firmware Slot Information 00:24:43.456 ========================= 00:24:43.456 Active slot: 1 00:24:43.456 Slot 1 Firmware Revision: 25.01 00:24:43.456 00:24:43.456 00:24:43.456 Commands Supported and Effects 00:24:43.456 ============================== 00:24:43.456 Admin Commands 00:24:43.456 -------------- 00:24:43.456 Get Log Page (02h): Supported 00:24:43.456 Identify (06h): Supported 00:24:43.456 Abort (08h): Supported 00:24:43.456 Set Features (09h): Supported 00:24:43.456 Get Features (0Ah): Supported 00:24:43.456 Asynchronous Event Request (0Ch): Supported 00:24:43.456 Keep Alive (18h): Supported 00:24:43.456 I/O Commands 00:24:43.456 ------------ 00:24:43.456 Flush (00h): Supported LBA-Change 00:24:43.456 Write (01h): Supported LBA-Change 00:24:43.456 Read (02h): Supported 00:24:43.456 Compare (05h): Supported 00:24:43.456 Write Zeroes (08h): Supported LBA-Change 00:24:43.456 Dataset Management (09h): Supported LBA-Change 00:24:43.456 Copy (19h): Supported LBA-Change 00:24:43.456 00:24:43.456 Error Log 00:24:43.456 ========= 00:24:43.456 00:24:43.456 Arbitration 00:24:43.456 =========== 00:24:43.456 Arbitration Burst: 1 00:24:43.456 00:24:43.456 Power Management 00:24:43.456 ================ 00:24:43.456 Number of Power States: 1 00:24:43.456 Current Power State: Power State #0 00:24:43.456 Power State #0: 00:24:43.456 Max Power: 0.00 W 00:24:43.456 Non-Operational State: Operational 00:24:43.456 Entry Latency: Not Reported 00:24:43.456 Exit Latency: Not Reported 00:24:43.456 Relative Read Throughput: 0 00:24:43.456 Relative Read Latency: 0 00:24:43.456 Relative Write Throughput: 0 00:24:43.456 Relative Write Latency: 0 00:24:43.456 Idle Power: Not Reported 00:24:43.456 Active Power: Not Reported 00:24:43.456 Non-Operational Permissive Mode: Not Supported 00:24:43.456 00:24:43.456 Health Information 00:24:43.456 ================== 00:24:43.456 Critical Warnings: 00:24:43.456 Available Spare Space: OK 00:24:43.456 Temperature: OK 00:24:43.456 Device Reliability: OK 00:24:43.456 Read Only: No 00:24:43.456 Volatile Memory Backup: OK 00:24:43.456 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:43.456 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:43.456 Available Spare: 0% 00:24:43.456 Available Spare Threshold: 0% 00:24:43.456 Life Percentage Used:[2024-11-27 06:27:02.946164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.456 [2024-11-27 06:27:02.946166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.456 [2024-11-27 06:27:02.946168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d540) on tqpair=0x9e9260 00:24:43.456 [2024-11-27 06:27:02.946172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.456 [2024-11-27 06:27:02.946176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.456 [2024-11-27 06:27:02.946177] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.456 [2024-11-27 06:27:02.946179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d6c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d6c0, cid 7, qid 0 00:24:43.457 [2024-11-27 06:27:02.946318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d6c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946347] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:43.457 [2024-11-27 06:27:02.946353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cc40) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.457 [2024-11-27 06:27:02.946359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cdc0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.457 [2024-11-27 06:27:02.946364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3cf40) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.457 [2024-11-27 06:27:02.946369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.457 [2024-11-27 06:27:02.946376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.457 [2024-11-27 06:27:02.946436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946441] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.457 [2024-11-27 06:27:02.946517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946527] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:43.457 [2024-11-27 06:27:02.946530] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:43.457 [2024-11-27 06:27:02.946535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.457 [2024-11-27 06:27:02.946589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.457 [2024-11-27 06:27:02.946662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946669] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.457 [2024-11-27 06:27:02.946737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.457 [2024-11-27 06:27:02.946811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.457 [2024-11-27 06:27:02.946828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.457 [2024-11-27 06:27:02.946832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.457 [2024-11-27 06:27:02.946836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.457 [2024-11-27 06:27:02.946845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.457 [2024-11-27 06:27:02.946888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.457 [2024-11-27 06:27:02.946891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.457 [2024-11-27 06:27:02.946893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.946895] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.946901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.946911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.946913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.946917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.946926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.946968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.946972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.946974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.946976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.946982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.946984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.946985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.946990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.946999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.947067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 
[2024-11-27 06:27:02.947124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.947138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.947209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.947280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.947347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.947416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 06:27:02.947484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.458 [2024-11-27 06:27:02.947530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.458 [2024-11-27 06:27:02.947534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.458 [2024-11-27 06:27:02.947536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.458 [2024-11-27 06:27:02.947544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.458 [2024-11-27 06:27:02.947548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.458 [2024-11-27 06:27:02.947552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.458 [2024-11-27 
06:27:02.947560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.459 [2024-11-27 06:27:02.947598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.459 [2024-11-27 06:27:02.947602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.459 [2024-11-27 06:27:02.947603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.459 [2024-11-27 06:27:02.947611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.459 [2024-11-27 06:27:02.947619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.459 [2024-11-27 06:27:02.947627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.459 [2024-11-27 06:27:02.947665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.459 [2024-11-27 06:27:02.947669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.459 [2024-11-27 06:27:02.947671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.459 [2024-11-27 06:27:02.947679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.459 [2024-11-27 06:27:02.947687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.459 [2024-11-27 06:27:02.947696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.459 [2024-11-27 06:27:02.947737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.459 [2024-11-27 06:27:02.947741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.459 [2024-11-27 06:27:02.947743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.459 [2024-11-27 06:27:02.947751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.459 [2024-11-27 06:27:02.947759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.459 [2024-11-27 06:27:02.947767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.459 [2024-11-27 06:27:02.947812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:24:43.459 [2024-11-27 06:27:02.947819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.459 [2024-11-27 06:27:02.947820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.459 [2024-11-27 06:27:02.947828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.459 [2024-11-27 06:27:02.947837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.459 [2024-11-27 06:27:02.947845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.459 [2024-11-27 06:27:02.947884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.459 [2024-11-27 06:27:02.947888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.459 [2024-11-27 06:27:02.947889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.459 [2024-11-27 06:27:02.947897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.947901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9e9260) 00:24:43.459 [2024-11-27 06:27:02.951913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.459 [2024-11-27 06:27:02.951930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa3d0c0, cid 3, qid 0 00:24:43.459 [2024-11-27 06:27:02.951980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:43.459 [2024-11-27 06:27:02.951984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:43.459 [2024-11-27 06:27:02.951986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:43.459 [2024-11-27 06:27:02.951988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa3d0c0) on tqpair=0x9e9260 00:24:43.459 [2024-11-27 06:27:02.951993] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:24:43.459 0% 00:24:43.459 Data Units Read: 0 00:24:43.459 Data Units Written: 0 00:24:43.459 Host Read Commands: 0 00:24:43.459 Host Write Commands: 0 00:24:43.459 Controller Busy Time: 0 minutes 00:24:43.459 Power Cycles: 0 00:24:43.459 Power On Hours: 0 hours 00:24:43.459 Unsafe Shutdowns: 0 00:24:43.459 Unrecoverable Media Errors: 0 00:24:43.459 Lifetime Error Log Entries: 0 00:24:43.459 Warning Temperature Time: 0 minutes 00:24:43.459 Critical Temperature Time: 0 minutes 00:24:43.459 00:24:43.459 Number of Queues 00:24:43.459 ================ 00:24:43.459 Number of I/O Submission Queues: 127 00:24:43.459 Number of I/O Completion Queues: 127 00:24:43.459 00:24:43.459 Active Namespaces 00:24:43.459 ================= 00:24:43.459 Namespace ID:1 00:24:43.459 Error Recovery Timeout: Unlimited 
00:24:43.459 Command Set Identifier: NVM (00h) 00:24:43.459 Deallocate: Supported 00:24:43.459 Deallocated/Unwritten Error: Not Supported 00:24:43.459 Deallocated Read Value: Unknown 00:24:43.459 Deallocate in Write Zeroes: Not Supported 00:24:43.459 Deallocated Guard Field: 0xFFFF 00:24:43.459 Flush: Supported 00:24:43.459 Reservation: Supported 00:24:43.459 Namespace Sharing Capabilities: Multiple Controllers 00:24:43.459 Size (in LBAs): 131072 (0GiB) 00:24:43.459 Capacity (in LBAs): 131072 (0GiB) 00:24:43.459 Utilization (in LBAs): 131072 (0GiB) 00:24:43.459 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:43.459 EUI64: ABCDEF0123456789 00:24:43.459 UUID: 0b3aa5c8-2449-4085-983e-3c16b9a74e88 00:24:43.459 Thin Provisioning: Not Supported 00:24:43.459 Per-NS Atomic Units: Yes 00:24:43.459 Atomic Boundary Size (Normal): 0 00:24:43.459 Atomic Boundary Size (PFail): 0 00:24:43.459 Atomic Boundary Offset: 0 00:24:43.459 Maximum Single Source Range Length: 65535 00:24:43.459 Maximum Copy Length: 65535 00:24:43.459 Maximum Source Range Count: 1 00:24:43.459 NGUID/EUI64 Never Reused: No 00:24:43.459 Namespace Write Protected: No 00:24:43.459 Number of LBA Formats: 1 00:24:43.459 Current LBA Format: LBA Format #00 00:24:43.459 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:43.459 00:24:43.718 06:27:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.718 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.718 rmmod nvme_tcp 00:24:43.719 rmmod nvme_fabrics 00:24:43.719 rmmod nvme_keyring 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 105430 ']' 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 105430 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 105430 ']' 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 105430 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:43.719 06:27:03 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105430 00:24:43.719 killing process with pid 105430 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105430' 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 105430 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 105430 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:43.719 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.977 06:27:03 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:24:43.977 00:24:43.977 real 0m2.389s 00:24:43.977 user 0m6.383s 00:24:43.977 sys 0m0.511s 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.977 ************************************ 00:24:43.977 END TEST nvmf_identify 00:24:43.977 ************************************ 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.977 ************************************ 00:24:43.977 START TEST nvmf_perf 00:24:43.977 ************************************ 00:24:43.977 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:44.237 * Looking for test storage... 00:24:44.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:44.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.237 --rc genhtml_branch_coverage=1 00:24:44.237 --rc genhtml_function_coverage=1 00:24:44.237 --rc genhtml_legend=1 00:24:44.237 --rc geninfo_all_blocks=1 00:24:44.237 --rc geninfo_unexecuted_blocks=1 00:24:44.237 00:24:44.237 ' 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:44.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.237 --rc genhtml_branch_coverage=1 00:24:44.237 --rc genhtml_function_coverage=1 00:24:44.237 --rc genhtml_legend=1 00:24:44.237 --rc geninfo_all_blocks=1 00:24:44.237 --rc geninfo_unexecuted_blocks=1 00:24:44.237 00:24:44.237 ' 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:44.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.237 --rc genhtml_branch_coverage=1 00:24:44.237 --rc genhtml_function_coverage=1 00:24:44.237 --rc genhtml_legend=1 00:24:44.237 --rc geninfo_all_blocks=1 00:24:44.237 --rc geninfo_unexecuted_blocks=1 00:24:44.237 00:24:44.237 ' 00:24:44.237 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:44.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.237 --rc genhtml_branch_coverage=1 00:24:44.238 --rc genhtml_function_coverage=1 00:24:44.238 --rc genhtml_legend=1 00:24:44.238 --rc geninfo_all_blocks=1 00:24:44.238 --rc geninfo_unexecuted_blocks=1 00:24:44.238 00:24:44.238 ' 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.238 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:44.238 Cannot find device "nvmf_init_br" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:44.238 Cannot find device "nvmf_init_br2" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:44.238 Cannot find device "nvmf_tgt_br" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:44.238 Cannot find device "nvmf_tgt_br2" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:44.238 Cannot find device "nvmf_init_br" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:44.238 Cannot find device "nvmf_init_br2" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:44.238 Cannot find device "nvmf_tgt_br" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:44.238 Cannot find device "nvmf_tgt_br2" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:44.238 Cannot find device "nvmf_br" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:44.238 Cannot find device "nvmf_init_if" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:44.238 Cannot find device "nvmf_init_if2" 00:24:44.238 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:44.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:44.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:44.239 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:44.497 06:27:03 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:44.497 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:44.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:44.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:24:44.498 00:24:44.498 --- 10.0.0.3 ping statistics --- 00:24:44.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.498 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:44.498 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:44.498 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:24:44.498 00:24:44.498 --- 10.0.0.4 ping statistics --- 00:24:44.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.498 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:44.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:44.498 00:24:44.498 --- 10.0.0.1 ping statistics --- 00:24:44.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.498 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:44.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:24:44.498 00:24:44.498 --- 10.0.0.2 ping statistics --- 00:24:44.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.498 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=105699 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 105699 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 105699 ']' 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
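The trace above is nvmf_veth_init building the test network: veth pairs are created per interface, the target ends (nvmf_tgt_if/nvmf_tgt_if2, carrying 10.0.0.3/.4) are moved into the nvmf_tgt_ns_spdk namespace, the host-side peers are enslaved to the nvmf_br bridge, and iptables ACCEPT rules are opened for TCP port 4420 before the cross-namespace pings verify connectivity in both directions. A condensed sketch of the same topology for one interface pair, using only commands, names, and addresses that appear in the trace (run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side goes into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                               # bridge joins the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # host -> namespace sanity check, as in the log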
00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.498 06:27:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:44.498 [2024-11-27 06:27:03.973990] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:24:44.498 [2024-11-27 06:27:03.974052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.756 [2024-11-27 06:27:04.091728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:44.756 [2024-11-27 06:27:04.116993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.756 [2024-11-27 06:27:04.135492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.756 [2024-11-27 06:27:04.135528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.756 [2024-11-27 06:27:04.135535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.757 [2024-11-27 06:27:04.135540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.757 [2024-11-27 06:27:04.135545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.757 [2024-11-27 06:27:04.136197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.757 [2024-11-27 06:27:04.136273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.757 [2024-11-27 06:27:04.136481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.757 [2024-11-27 06:27:04.136483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.323 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.323 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:45.323 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.323 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.323 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:45.581 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.581 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:24:45.581 06:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:45.839 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:45.839 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:24:46.098 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:24:46.098 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:46.358 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
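At this point perf.sh has launched nvmf_tgt inside the namespace (the reactor and DPDK notices above) and is assembling bdevs: gen_nvme.sh emits a bdev config for the locally attached controller, jq pulls its PCI address out of the framework config, and a 64 MiB RAM-backed malloc bdev is created alongside it. The equivalent calls, as they appear in the trace (the jq filter assumes the generated controller is named Nvme0):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr'    # prints 0000:00:10.0
  $rpc bdev_malloc_create 64 512                            # 64 MB, 512-byte blocks -> "Malloc0"

Both bdevs are then exported through a single subsystem (nqn.2016-06.io.spdk:cnode1) with a TCP listener on 10.0.0.3:4420, which is the endpoint the spdk_nvme_perf runs below connect to.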
00:24:46.358 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:24:46.358 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:46.358 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:46.358 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:46.358 [2024-11-27 06:27:05.861983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.635 06:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.635 06:27:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:46.635 06:27:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.922 06:27:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:46.922 06:27:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:47.181 06:27:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:47.439 [2024-11-27 06:27:06.795136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:47.439 06:27:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:47.697 06:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:47.697 06:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:24:47.697 06:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:47.697 06:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:24:48.630 Initializing NVMe Controllers 00:24:48.630 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:48.630 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:24:48.630 Initialization complete. Launching workers. 
00:24:48.630 ======================================================== 00:24:48.630 Latency(us) 00:24:48.630 Device Information : IOPS MiB/s Average min max 00:24:48.630 PCIE (0000:00:10.0) NSID 1 from core 0: 31932.58 124.74 1001.84 245.69 5757.76 00:24:48.630 ======================================================== 00:24:48.630 Total : 31932.58 124.74 1001.84 245.69 5757.76 00:24:48.630 00:24:48.630 06:27:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:50.006 Initializing NVMe Controllers 00:24:50.006 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.006 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:50.006 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:50.006 Initialization complete. Launching workers. 00:24:50.006 ======================================================== 00:24:50.006 Latency(us) 00:24:50.006 Device Information : IOPS MiB/s Average min max 00:24:50.006 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6018.93 23.51 165.25 62.61 4152.40 00:24:50.006 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.54 5011.29 12018.93 00:24:50.006 ======================================================== 00:24:50.006 Total : 6142.93 24.00 325.99 62.61 12018.93 00:24:50.006 00:24:50.006 06:27:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:51.446 Initializing NVMe Controllers 00:24:51.446 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.446 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:51.446 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:51.446 Initialization complete. Launching workers. 00:24:51.446 ======================================================== 00:24:51.446 Latency(us) 00:24:51.446 Device Information : IOPS MiB/s Average min max 00:24:51.446 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7787.44 30.42 4110.50 711.08 15671.13 00:24:51.446 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2398.66 9.37 13456.31 5752.21 35825.64 00:24:51.446 ======================================================== 00:24:51.447 Total : 10186.10 39.79 6311.29 711.08 35825.64 00:24:51.447 00:24:51.447 06:27:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:24:51.447 06:27:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:53.978 Initializing NVMe Controllers 00:24:53.978 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.978 Controller IO queue size 128, less than required. 00:24:53.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.978 Controller IO queue size 128, less than required. 
00:24:53.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.978 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:53.978 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:53.978 Initialization complete. Launching workers. 00:24:53.978 ======================================================== 00:24:53.978 Latency(us) 00:24:53.978 Device Information : IOPS MiB/s Average min max 00:24:53.978 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1790.35 447.59 72143.73 51216.26 132197.73 00:24:53.978 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 618.26 154.56 219745.62 121817.69 349137.88 00:24:53.978 ======================================================== 00:24:53.978 Total : 2408.61 602.15 110031.17 51216.26 349137.88 00:24:53.978 00:24:53.978 06:27:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:24:54.237 Initializing NVMe Controllers 00:24:54.237 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:54.237 Controller IO queue size 128, less than required. 00:24:54.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:54.237 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:54.237 Controller IO queue size 128, less than required. 00:24:54.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:54.237 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:24:54.237 WARNING: Some requested NVMe devices were skipped 00:24:54.237 No valid NVMe controllers or AIO or URING devices found 00:24:54.237 06:27:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:24:56.814 Initializing NVMe Controllers 00:24:56.814 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.814 Controller IO queue size 128, less than required. 00:24:56.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.814 Controller IO queue size 128, less than required. 00:24:56.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.814 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.814 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:56.814 Initialization complete. Launching workers. 
00:24:56.814 00:24:56.814 ==================== 00:24:56.814 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:56.814 TCP transport: 00:24:56.814 polls: 11486 00:24:56.814 idle_polls: 5935 00:24:56.814 sock_completions: 5551 00:24:56.814 nvme_completions: 8211 00:24:56.814 submitted_requests: 12342 00:24:56.814 queued_requests: 1 00:24:56.814 00:24:56.814 ==================== 00:24:56.814 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:56.814 TCP transport: 00:24:56.814 polls: 11612 00:24:56.814 idle_polls: 6738 00:24:56.814 sock_completions: 4874 00:24:56.814 nvme_completions: 9473 00:24:56.814 submitted_requests: 14284 00:24:56.814 queued_requests: 1 00:24:56.814 ======================================================== 00:24:56.814 Latency(us) 00:24:56.814 Device Information : IOPS MiB/s Average min max 00:24:56.814 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2052.33 513.08 63280.64 36813.62 121280.61 00:24:56.814 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2367.81 591.95 54311.94 21094.29 82305.23 00:24:56.814 ======================================================== 00:24:56.814 Total : 4420.14 1105.03 58476.23 21094.29 121280.61 00:24:56.814 00:24:56.815 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:56.815 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.074 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:24:57.074 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:24:57.074 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=8ae2a2ac-e59b-4db6-9d28-fed347fafc0e 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 8ae2a2ac-e59b-4db6-9d28-fed347fafc0e 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=8ae2a2ac-e59b-4db6-9d28-fed347fafc0e 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:24:57.335 { 00:24:57.335 "base_bdev": "Nvme0n1", 00:24:57.335 "block_size": 4096, 00:24:57.335 "cluster_size": 4194304, 00:24:57.335 "free_clusters": 1278, 00:24:57.335 "name": "lvs_0", 00:24:57.335 "total_data_clusters": 1278, 00:24:57.335 "uuid": "8ae2a2ac-e59b-4db6-9d28-fed347fafc0e" 00:24:57.335 } 00:24:57.335 ]' 00:24:57.335 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8ae2a2ac-e59b-4db6-9d28-fed347fafc0e") .free_clusters' 00:24:57.596 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:24:57.596 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="8ae2a2ac-e59b-4db6-9d28-fed347fafc0e") .cluster_size' 00:24:57.596 5112 00:24:57.596 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:24:57.596 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:24:57.596 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:24:57.596 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:24:57.596 06:27:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8ae2a2ac-e59b-4db6-9d28-fed347fafc0e lbd_0 5112 00:24:57.596 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=106c3d7c-ed95-4c9c-8fa9-b48030224afb 00:24:57.596 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 106c3d7c-ed95-4c9c-8fa9-b48030224afb lvs_n_0 00:24:57.856 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=83748d2c-34a3-4665-b0fc-1ebe240b0715 00:24:57.856 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 83748d2c-34a3-4665-b0fc-1ebe240b0715 00:24:57.856 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=83748d2c-34a3-4665-b0fc-1ebe240b0715 00:24:57.856 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:24:57.856 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:24:57.856 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:24:57.856 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:58.116 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:24:58.116 { 00:24:58.116 "base_bdev": "Nvme0n1", 00:24:58.116 "block_size": 4096, 00:24:58.116 "cluster_size": 4194304, 00:24:58.116 "free_clusters": 0, 00:24:58.116 "name": "lvs_0", 00:24:58.116 "total_data_clusters": 1278, 00:24:58.116 "uuid": "8ae2a2ac-e59b-4db6-9d28-fed347fafc0e" 00:24:58.116 }, 00:24:58.116 { 00:24:58.117 "base_bdev": "106c3d7c-ed95-4c9c-8fa9-b48030224afb", 00:24:58.117 "block_size": 4096, 00:24:58.117 "cluster_size": 4194304, 00:24:58.117 "free_clusters": 1276, 00:24:58.117 "name": "lvs_n_0", 00:24:58.117 "total_data_clusters": 1276, 00:24:58.117 "uuid": "83748d2c-34a3-4665-b0fc-1ebe240b0715" 00:24:58.117 } 00:24:58.117 ]' 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="83748d2c-34a3-4665-b0fc-1ebe240b0715") .free_clusters' 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="83748d2c-34a3-4665-b0fc-1ebe240b0715") .cluster_size' 00:24:58.117 5104 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:24:58.117 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 83748d2c-34a3-4665-b0fc-1ebe240b0715 lbd_nest_0 5104 00:24:58.377 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=47f62612-7c4f-46b1-8e60-aa51e5c63636 00:24:58.377 06:27:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.638 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:24:58.638 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 47f62612-7c4f-46b1-8e60-aa51e5c63636 00:24:58.900 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:59.161 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:24:59.161 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:24:59.161 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:59.161 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:59.161 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:59.422 Initializing NVMe Controllers 00:24:59.422 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:59.422 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:59.422 WARNING: Some requested NVMe devices were skipped 00:24:59.422 No valid NVMe controllers or AIO or URING devices found 00:24:59.422 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:59.422 06:27:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:11.630 Initializing NVMe Controllers 00:25:11.630 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:11.630 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:11.630 Initialization complete. Launching workers. 
00:25:11.630 ======================================================== 00:25:11.630 Latency(us) 00:25:11.630 Device Information : IOPS MiB/s Average min max 00:25:11.630 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1329.77 166.22 751.47 238.66 9477.81 00:25:11.630 ======================================================== 00:25:11.630 Total : 1329.77 166.22 751.47 238.66 9477.81 00:25:11.630 00:25:11.630 06:27:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:11.630 06:27:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:11.630 06:27:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:11.630 Initializing NVMe Controllers 00:25:11.630 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:11.630 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:25:11.630 WARNING: Some requested NVMe devices were skipped 00:25:11.630 No valid NVMe controllers or AIO or URING devices found 00:25:11.630 06:27:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:11.630 06:27:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:21.618 Initializing NVMe Controllers 00:25:21.618 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:21.618 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:21.618 Initialization complete. Launching workers. 
00:25:21.618 ======================================================== 00:25:21.618 Latency(us) 00:25:21.618 Device Information : IOPS MiB/s Average min max 00:25:21.618 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1064.88 133.11 30088.05 7917.97 271246.00 00:25:21.618 ======================================================== 00:25:21.618 Total : 1064.88 133.11 30088.05 7917.97 271246.00 00:25:21.618 00:25:21.618 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:21.618 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:21.618 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:21.618 Initializing NVMe Controllers 00:25:21.618 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:21.618 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:25:21.618 WARNING: Some requested NVMe devices were skipped 00:25:21.618 No valid NVMe controllers or AIO or URING devices found 00:25:21.618 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:21.618 06:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:31.586 Initializing NVMe Controllers 00:25:31.586 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:31.586 Controller IO queue size 128, less than required. 00:25:31.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.586 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:31.586 Initialization complete. Launching workers. 
00:25:31.586 ======================================================== 00:25:31.586 Latency(us) 00:25:31.586 Device Information : IOPS MiB/s Average min max 00:25:31.586 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5186.69 648.34 24687.01 8020.83 59938.84 00:25:31.586 ======================================================== 00:25:31.586 Total : 5186.69 648.34 24687.01 8020.83 59938.84 00:25:31.586 00:25:31.586 06:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.586 06:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 47f62612-7c4f-46b1-8e60-aa51e5c63636 00:25:31.586 06:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:31.586 06:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 106c3d7c-ed95-4c9c-8fa9-b48030224afb 00:25:31.844 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.101 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.101 rmmod nvme_tcp 00:25:32.101 rmmod nvme_fabrics 00:25:32.101 rmmod nvme_keyring 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 105699 ']' 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 105699 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 105699 ']' 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 105699 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105699 00:25:32.359 killing process with pid 105699 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105699' 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 105699 00:25:32.359 06:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 105699 00:25:33.295 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:33.295 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:33.295 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:33.295 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:33.295 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:33.295 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:33.295 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:33.296 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:25:33.554 00:25:33.554 real 0m49.448s 00:25:33.554 user 3m8.142s 00:25:33.554 sys 0m8.416s 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:33.554 ************************************ 00:25:33.554 END TEST nvmf_perf 00:25:33.554 ************************************ 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.554 ************************************ 00:25:33.554 START TEST nvmf_fio_host 00:25:33.554 ************************************ 00:25:33.554 06:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:33.554 * Looking for test storage... 00:25:33.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.554 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:33.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.554 --rc genhtml_branch_coverage=1 00:25:33.555 --rc genhtml_function_coverage=1 00:25:33.555 --rc genhtml_legend=1 00:25:33.555 --rc geninfo_all_blocks=1 00:25:33.555 --rc geninfo_unexecuted_blocks=1 00:25:33.555 00:25:33.555 ' 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:33.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.555 --rc genhtml_branch_coverage=1 00:25:33.555 --rc genhtml_function_coverage=1 00:25:33.555 --rc genhtml_legend=1 00:25:33.555 --rc geninfo_all_blocks=1 00:25:33.555 --rc geninfo_unexecuted_blocks=1 00:25:33.555 00:25:33.555 ' 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:33.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.555 --rc genhtml_branch_coverage=1 00:25:33.555 --rc genhtml_function_coverage=1 00:25:33.555 --rc genhtml_legend=1 00:25:33.555 --rc geninfo_all_blocks=1 00:25:33.555 --rc geninfo_unexecuted_blocks=1 00:25:33.555 00:25:33.555 ' 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:33.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.555 --rc genhtml_branch_coverage=1 00:25:33.555 --rc genhtml_function_coverage=1 00:25:33.555 --rc genhtml_legend=1 00:25:33.555 --rc geninfo_all_blocks=1 00:25:33.555 --rc geninfo_unexecuted_blocks=1 00:25:33.555 00:25:33.555 ' 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:33.555 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.814 06:27:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.814 06:27:53 
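
nvmf/common.sh derives the host identity in one step: nvme gen-hostnqn emits an NQN with an embedded UUID, and that same UUID doubles as the host ID. A sketch of the same pairing on an initiator, assuming nvme-cli is installed; the connect line targets the cnode1 subsystem this run creates later:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID part

    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"
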
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:33.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
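
The one genuine wart logged in this stretch is the script error above: line 33 of test/nvmf/common.sh evaluates [ '' -eq 1 ], and test(1) refuses an empty string where it expects an integer. The cure is to give the variable a numeric default before the test; the flag name below is a placeholder, since the trace does not show which variable was empty:

    # '[ "$flag" -eq 1 ]' dies with "integer expression expected" when the
    # flag is unset; default it to 0 first (SOME_FLAG is hypothetical).
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--hypothetical-extra-arg)    # placeholder branch body
    fi
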
00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:33.814 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:33.815 Cannot find device "nvmf_init_br" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:33.815 Cannot find device "nvmf_init_br2" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:33.815 Cannot find device "nvmf_tgt_br" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:25:33.815 Cannot find device "nvmf_tgt_br2" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:33.815 Cannot find device "nvmf_init_br" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:33.815 Cannot find device "nvmf_init_br2" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:33.815 Cannot find device "nvmf_tgt_br" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:33.815 Cannot find device "nvmf_tgt_br2" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:33.815 Cannot find device "nvmf_br" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:33.815 Cannot find device "nvmf_init_if" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:33.815 Cannot find device "nvmf_init_if2" 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:33.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:33.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:33.815 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:34.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
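
Everything from ip netns add down to the tagged iptables inserts builds one small topology: the target's interfaces live in the nvmf_tgt_ns_spdk namespace, each veth's peer stays in the root namespace, and a single bridge joins the peers. Reproduced by hand, condensed to one interface per side (the run above creates two per side):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Tag the ACCEPT rule with a comment so teardown can strip it by pattern.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # connectivity check, as in the trace
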
00:25:34.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:34.074 00:25:34.074 --- 10.0.0.3 ping statistics --- 00:25:34.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.074 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:34.074 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:34.074 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:25:34.074 00:25:34.074 --- 10.0.0.4 ping statistics --- 00:25:34.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.074 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:34.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:25:34.074 00:25:34.074 --- 10.0.0.1 ping statistics --- 00:25:34.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.074 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:34.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:25:34.074 00:25:34.074 --- 10.0.0.2 ping statistics --- 00:25:34.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.074 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=106714 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 106714 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 106714 ']' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.074 06:27:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.074 [2024-11-27 06:27:53.458423] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:25:34.074 [2024-11-27 06:27:53.458578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.074 [2024-11-27 06:27:53.576852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:34.332 [2024-11-27 06:27:53.606294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.332 [2024-11-27 06:27:53.626597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.332 [2024-11-27 06:27:53.626852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.332 [2024-11-27 06:27:53.627032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.332 [2024-11-27 06:27:53.627520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.332 [2024-11-27 06:27:53.627589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
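
waitforlisten simply blocks until the freshly forked target answers on its RPC socket. A hand-rolled equivalent using the same namespace and flags as this run; the polling loop is our sketch, while spdk_get_version is a standard SPDK RPC method:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the app is ready (or bail out if it died).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
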
00:25:34.332 [2024-11-27 06:27:53.628453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.332 [2024-11-27 06:27:53.628745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.332 [2024-11-27 06:27:53.629424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.332 [2024-11-27 06:27:53.629651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.897 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.897 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:34.897 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:35.155 [2024-11-27 06:27:54.521506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.155 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:35.155 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:35.155 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.155 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:35.413 Malloc1 00:25:35.413 06:27:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:35.671 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:35.930 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:35.930 [2024-11-27 06:27:55.440957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
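
From this point the target is driven purely over JSON-RPC. The provisioning sequence for the first fio pass, collapsed out of the trace (all values exactly as logged):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport
    $rpc bdev_malloc_create 64 512 -b Malloc1                # 64 MiB RAM-backed bdev
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
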
common/autotest_common.sh@1345 -- # shift 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:36.188 06:27:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:36.445 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:36.446 fio-3.35 00:25:36.446 Starting 1 thread 00:25:38.970 00:25:38.970 test: (groupid=0, jobs=1): err= 0: pid=106840: Wed Nov 27 06:27:58 2024 00:25:38.970 read: IOPS=9814, BW=38.3MiB/s (40.2MB/s)(76.9MiB/2006msec) 00:25:38.970 slat (nsec): min=1900, max=253658, avg=2149.62, stdev=2301.38 00:25:38.970 clat (usec): min=3324, max=15930, avg=6811.73, stdev=996.09 00:25:38.970 lat (usec): min=3326, max=15932, avg=6813.87, stdev=996.41 00:25:38.970 clat percentiles (usec): 00:25:38.970 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6325], 00:25:38.970 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6783], 00:25:38.970 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7570], 00:25:38.970 | 99.00th=[12387], 99.50th=[12911], 99.90th=[13566], 99.95th=[14091], 00:25:38.970 | 99.99th=[15926] 00:25:38.970 bw ( KiB/s): min=36064, max=40752, per=99.97%, avg=39246.00, stdev=2152.09, samples=4 00:25:38.970 iops : min= 9016, max=10188, avg=9811.50, stdev=538.02, samples=4 00:25:38.970 write: IOPS=9830, BW=38.4MiB/s (40.3MB/s)(77.0MiB/2006msec); 0 zone resets 00:25:38.970 slat (nsec): min=1936, max=187366, avg=2223.86, stdev=1973.74 00:25:38.970 clat (usec): min=2077, max=13051, avg=6166.80, stdev=931.10 00:25:38.970 lat (usec): min=2080, max=13055, avg=6169.02, stdev=931.45 00:25:38.970 clat percentiles (usec): 00:25:38.970 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5735], 
00:25:38.970 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:25:38.970 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6783], 00:25:38.970 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12256], 99.95th=[12387], 00:25:38.970 | 99.99th=[12649] 00:25:38.970 bw ( KiB/s): min=36888, max=40552, per=99.98%, avg=39312.00, stdev=1644.83, samples=4 00:25:38.970 iops : min= 9222, max=10138, avg=9828.00, stdev=411.21, samples=4 00:25:38.970 lat (msec) : 4=0.22%, 10=97.32%, 20=2.47% 00:25:38.970 cpu : usr=77.51%, sys=17.66%, ctx=79, majf=0, minf=4 00:25:38.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:38.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:38.970 issued rwts: total=19687,19719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:38.970 00:25:38.970 Run status group 0 (all jobs): 00:25:38.970 READ: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=76.9MiB (80.6MB), run=2006-2006msec 00:25:38.970 WRITE: bw=38.4MiB/s (40.3MB/s), 38.4MiB/s-38.4MiB/s (40.3MB/s-40.3MB/s), io=77.0MiB (80.8MB), run=2006-2006msec 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:38.970 06:27:58 
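
The fio_plugin wrapper above reduces to three ingredients: preload SPDK's external ioengine (the ldd output piped through grep libasan is only there to preload an ASan runtime first when the plugin was built with sanitizers), reuse a stock job file, and encode the NVMe-oF connection parameters in --filename instead of a device path. The first run's invocation, reconstructed from the trace:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' \
        --bs=4096
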
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:38.970 06:27:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:25:38.970 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:38.970 fio-3.35 00:25:38.970 Starting 1 thread 00:25:41.496 00:25:41.496 test: (groupid=0, jobs=1): err= 0: pid=106888: Wed Nov 27 06:28:00 2024 00:25:41.496 read: IOPS=9005, BW=141MiB/s (148MB/s)(282MiB/2007msec) 00:25:41.496 slat (usec): min=3, max=107, avg= 3.39, stdev= 1.57 00:25:41.496 clat (msec): min=2, max=255, avg= 8.46, stdev=14.83 00:25:41.496 lat (msec): min=2, max=255, avg= 8.46, stdev=14.83 00:25:41.496 clat percentiles (msec): 00:25:41.496 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:25:41.496 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:25:41.496 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:25:41.496 | 99.00th=[ 13], 99.50th=[ 14], 99.90th=[ 255], 99.95th=[ 255], 00:25:41.496 | 99.99th=[ 255] 00:25:41.496 bw ( KiB/s): min=44512, max=80864, per=49.46%, avg=71272.00, stdev=17848.86, samples=4 00:25:41.496 iops : min= 2782, max= 5054, avg=4454.50, stdev=1115.55, samples=4 00:25:41.496 write: IOPS=5224, BW=81.6MiB/s (85.6MB/s)(145MiB/1775msec); 0 zone resets 00:25:41.496 slat (usec): min=36, max=317, avg=37.85, stdev= 5.91 00:25:41.496 clat (msec): min=2, max=256, avg=10.32, stdev=20.27 00:25:41.496 lat (msec): min=2, max=256, avg=10.36, stdev=20.27 00:25:41.496 clat percentiles (msec): 00:25:41.496 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:25:41.496 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:25:41.496 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:25:41.496 | 99.00th=[ 13], 99.50th=[ 255], 99.90th=[ 257], 99.95th=[ 257], 00:25:41.496 | 99.99th=[ 257] 00:25:41.496 bw ( KiB/s): min=46080, max=84832, per=88.75%, avg=74192.00, stdev=18767.18, samples=4 00:25:41.496 iops : min= 2880, max= 5302, avg=4637.00, stdev=1172.95, samples=4 00:25:41.496 lat (msec) : 4=0.98%, 10=89.30%, 20=9.25%, 500=0.47% 00:25:41.496 cpu : usr=81.06%, sys=12.36%, ctx=7, majf=0, minf=26 00:25:41.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:41.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:41.496 issued rwts: total=18075,9274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:41.496 00:25:41.496 Run status group 0 (all jobs): 00:25:41.496 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=282MiB (296MB), run=2007-2007msec 00:25:41.496 WRITE: bw=81.6MiB/s (85.6MB/s), 81.6MiB/s-81.6MiB/s (85.6MB/s-85.6MB/s), io=145MiB (152MB), 
run=1775-1775msec 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:41.496 06:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:25:41.754 Nvme0n1 00:25:41.754 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=cf8fa037-12e2-4c91-bce0-c024b55df1f8 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb cf8fa037-12e2-4c91-bce0-c024b55df1f8 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=cf8fa037-12e2-4c91-bce0-c024b55df1f8 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:25:42.013 { 00:25:42.013 "base_bdev": "Nvme0n1", 00:25:42.013 "block_size": 4096, 00:25:42.013 "cluster_size": 1073741824, 00:25:42.013 "free_clusters": 4, 00:25:42.013 "name": "lvs_0", 00:25:42.013 "total_data_clusters": 4, 00:25:42.013 "uuid": "cf8fa037-12e2-4c91-bce0-c024b55df1f8" 00:25:42.013 } 00:25:42.013 ]' 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="cf8fa037-12e2-4c91-bce0-c024b55df1f8") .free_clusters' 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:25:42.013 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="cf8fa037-12e2-4c91-bce0-c024b55df1f8") .cluster_size' 00:25:42.271 06:28:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:25:42.271 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:25:42.271 4096 00:25:42.271 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:25:42.271 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:25:42.271 053025e4-bbb2-4776-bcc3-4a300241de6b 00:25:42.271 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:25:42.529 06:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:25:42.786 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- 
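
get_lvs_free_mb is plain arithmetic over bdev_lvol_get_lvstores output: free_clusters times cluster_size, scaled down to MiB, which is where the logged 4096 comes from (4 free clusters of 1 GiB each). A sketch using the same jq filters as the trace:

    uuid=cf8fa037-12e2-4c91-bce0-c024b55df1f8
    lvs=$($rpc bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs")
    cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<< "$lvs")
    echo $(( fc * cs / 1024 / 1024 ))    # 4 * 1073741824 B -> 4096 MiB
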
# awk '{print $3}' 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:43.045 06:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:43.045 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:43.045 fio-3.35 00:25:43.045 Starting 1 thread 00:25:45.600 00:25:45.600 test: (groupid=0, jobs=1): err= 0: pid=107034: Wed Nov 27 06:28:04 2024 00:25:45.600 read: IOPS=9063, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec) 00:25:45.600 slat (nsec): min=1898, max=284353, avg=2078.45, stdev=2740.49 00:25:45.600 clat (usec): min=2779, max=17675, avg=7387.90, stdev=832.89 00:25:45.600 lat (usec): min=2789, max=17677, avg=7389.98, stdev=832.74 00:25:45.600 clat percentiles (usec): 00:25:45.600 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6783], 00:25:45.600 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:25:45.600 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8717], 00:25:45.600 | 99.00th=[10028], 99.50th=[10814], 99.90th=[15270], 99.95th=[16581], 00:25:45.600 | 99.99th=[17695] 00:25:45.600 bw ( KiB/s): min=34824, max=37432, per=99.88%, avg=36208.00, stdev=1073.81, samples=4 00:25:45.600 iops : min= 8706, max= 9358, avg=9052.00, stdev=268.45, samples=4 00:25:45.600 write: IOPS=9072, BW=35.4MiB/s (37.2MB/s)(71.1MiB/2006msec); 0 zone resets 00:25:45.600 slat (nsec): min=1930, max=253307, avg=2139.19, stdev=2010.46 00:25:45.600 clat (usec): min=2230, max=16870, avg=6656.11, stdev=792.49 00:25:45.600 lat (usec): min=2243, max=16872, avg=6658.25, stdev=792.41 00:25:45.600 clat percentiles (usec): 00:25:45.600 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6128], 00:25:45.600 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:25:45.600 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7439], 95.00th=[ 7832], 00:25:45.600 | 99.00th=[ 9110], 99.50th=[ 9896], 99.90th=[14746], 99.95th=[15270], 00:25:45.600 | 99.99th=[16188] 00:25:45.600 bw ( KiB/s): min=34816, max=37696, per=100.00%, avg=36298.00, stdev=1328.83, samples=4 00:25:45.600 iops : min= 8704, max= 9424, avg=9074.50, stdev=332.21, samples=4 00:25:45.600 lat (msec) : 4=0.20%, 10=99.06%, 20=0.74% 00:25:45.600 cpu : usr=79.00%, sys=16.86%, ctx=6, majf=0, minf=28 00:25:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:45.600 issued rwts: total=18181,18199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:45.600 00:25:45.600 Run status group 0 (all jobs): 00:25:45.600 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.5MB), run=2006-2006msec 00:25:45.600 WRITE: bw=35.4MiB/s (37.2MB/s), 35.4MiB/s-35.4MiB/s (37.2MB/s-37.2MB/s), io=71.1MiB (74.5MB), run=2006-2006msec 00:25:45.600 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:45.600 06:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:25:45.859 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fa5f34d9-90bf-4e77-a2bd-8efa44748e88 00:25:45.859 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fa5f34d9-90bf-4e77-a2bd-8efa44748e88 00:25:45.859 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=fa5f34d9-90bf-4e77-a2bd-8efa44748e88 00:25:45.859 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:25:45.859 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:25:45.859 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:25:45.859 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:25:46.117 { 00:25:46.117 "base_bdev": "Nvme0n1", 00:25:46.117 "block_size": 4096, 00:25:46.117 "cluster_size": 1073741824, 00:25:46.117 "free_clusters": 0, 00:25:46.117 "name": "lvs_0", 00:25:46.117 "total_data_clusters": 4, 00:25:46.117 "uuid": "cf8fa037-12e2-4c91-bce0-c024b55df1f8" 00:25:46.117 }, 00:25:46.117 { 00:25:46.117 "base_bdev": "053025e4-bbb2-4776-bcc3-4a300241de6b", 00:25:46.117 "block_size": 4096, 00:25:46.117 "cluster_size": 4194304, 00:25:46.117 "free_clusters": 1022, 00:25:46.117 "name": "lvs_n_0", 00:25:46.117 "total_data_clusters": 1022, 00:25:46.117 "uuid": "fa5f34d9-90bf-4e77-a2bd-8efa44748e88" 00:25:46.117 } 00:25:46.117 ]' 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="fa5f34d9-90bf-4e77-a2bd-8efa44748e88") .free_clusters' 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="fa5f34d9-90bf-4e77-a2bd-8efa44748e88") .cluster_size' 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:25:46.117 4088 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:25:46.117 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:25:46.375 a0a74fbe-5cd0-4721-a290-f08c7d7eaf7e 00:25:46.376 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:25:46.376 06:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:25:46.634 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:25:46.893 
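
lvs_n_0 is a nested store: its base bdev is itself a logical volume carved from lvs_0, so the usable space shrinks to 1022 free clusters of 4 MiB, the 4088 echoed above. The chain for the third fio pass, collapsed from the trace:

    $rpc bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
    $rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420
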
06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:46.893 06:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:25:47.152 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:47.152 fio-3.35 00:25:47.152 Starting 1 thread 00:25:49.681 00:25:49.681 test: (groupid=0, jobs=1): err= 0: pid=107156: Wed Nov 27 06:28:08 2024 00:25:49.681 read: IOPS=8057, BW=31.5MiB/s (33.0MB/s)(63.2MiB/2007msec) 00:25:49.681 slat (nsec): min=1911, max=155767, avg=2095.69, stdev=1565.58 00:25:49.681 clat (usec): min=2334, max=14006, avg=8336.63, 
stdev=719.81 00:25:49.681 lat (usec): min=2338, max=14008, avg=8338.72, stdev=719.73 00:25:49.681 clat percentiles (usec): 00:25:49.681 | 1.00th=[ 6849], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 7767], 00:25:49.681 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:25:49.681 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9503], 00:25:49.681 | 99.00th=[10028], 99.50th=[10159], 99.90th=[12780], 99.95th=[13435], 00:25:49.681 | 99.99th=[13960] 00:25:49.681 bw ( KiB/s): min=31032, max=33184, per=100.00%, avg=32228.00, stdev=890.14, samples=4 00:25:49.681 iops : min= 7758, max= 8296, avg=8057.00, stdev=222.53, samples=4 00:25:49.681 write: IOPS=8040, BW=31.4MiB/s (32.9MB/s)(63.0MiB/2007msec); 0 zone resets 00:25:49.681 slat (nsec): min=1951, max=109364, avg=2168.94, stdev=1009.41 00:25:49.681 clat (usec): min=1523, max=13750, avg=7487.36, stdev=696.41 00:25:49.681 lat (usec): min=1529, max=13752, avg=7489.53, stdev=696.36 00:25:49.681 clat percentiles (usec): 00:25:49.681 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6915], 00:25:49.681 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635], 00:25:49.681 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8586], 00:25:49.681 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[12649], 99.95th=[13173], 00:25:49.681 | 99.99th=[13698] 00:25:49.681 bw ( KiB/s): min=31824, max=32704, per=99.91%, avg=32134.00, stdev=399.22, samples=4 00:25:49.681 iops : min= 7958, max= 8176, avg=8033.50, stdev=99.34, samples=4 00:25:49.681 lat (msec) : 2=0.01%, 4=0.12%, 10=99.25%, 20=0.61% 00:25:49.681 cpu : usr=79.41%, sys=16.80%, ctx=4, majf=0, minf=28 00:25:49.681 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:49.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:49.681 issued rwts: total=16171,16137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.681 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:49.681 00:25:49.681 Run status group 0 (all jobs): 00:25:49.681 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.2MiB (66.2MB), run=2007-2007msec 00:25:49.681 WRITE: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.0MiB (66.1MB), run=2007-2007msec 00:25:49.681 06:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:49.681 06:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:25:49.681 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:25:49.938 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:25:49.938 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:25:50.196 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:25:50.456 06:28:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:53.739 
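
The teardown traced in the lines that follow (nvmftestfini) reverses the setup in order: stop the target, unload the initiator modules, strip only the iptables rules tagged SPDK_NVMF, then delete the links, the bridge, and the namespace. Condensed sketch; the kill/wait shape stands in for the framework's killprocess helper:

    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null    # stop nvmf_tgt
    modprobe -r nvme-tcp nvme-fabrics               # unload initiator modules

    # Drop only the rules this run added, by their comment tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk
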
06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.739 rmmod nvme_tcp 00:25:53.739 rmmod nvme_fabrics 00:25:53.739 rmmod nvme_keyring 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 106714 ']' 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 106714 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 106714 ']' 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 106714 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106714 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:53.739 killing process with pid 106714 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106714' 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 106714 00:25:53.739 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 106714 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br 
nomaster 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:53.997 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:25:54.256 00:25:54.256 real 0m20.648s 00:25:54.256 user 1m27.930s 00:25:54.256 sys 0m3.300s 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.256 ************************************ 00:25:54.256 END TEST nvmf_fio_host 00:25:54.256 ************************************ 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.256 ************************************ 00:25:54.256 START TEST nvmf_failover 00:25:54.256 ************************************ 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:54.256 * Looking for test storage... 
00:25:54.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.256 --rc genhtml_branch_coverage=1 00:25:54.256 --rc genhtml_function_coverage=1 00:25:54.256 --rc genhtml_legend=1 00:25:54.256 --rc geninfo_all_blocks=1 00:25:54.256 --rc geninfo_unexecuted_blocks=1 00:25:54.256 00:25:54.256 ' 00:25:54.256 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.256 --rc genhtml_branch_coverage=1 00:25:54.256 --rc genhtml_function_coverage=1 00:25:54.256 --rc genhtml_legend=1 00:25:54.257 --rc geninfo_all_blocks=1 00:25:54.257 --rc geninfo_unexecuted_blocks=1 00:25:54.257 00:25:54.257 ' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:54.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.257 --rc genhtml_branch_coverage=1 00:25:54.257 --rc genhtml_function_coverage=1 00:25:54.257 --rc genhtml_legend=1 00:25:54.257 --rc geninfo_all_blocks=1 00:25:54.257 --rc geninfo_unexecuted_blocks=1 00:25:54.257 00:25:54.257 ' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:54.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.257 --rc genhtml_branch_coverage=1 00:25:54.257 --rc genhtml_function_coverage=1 00:25:54.257 --rc genhtml_legend=1 00:25:54.257 --rc geninfo_all_blocks=1 00:25:54.257 --rc geninfo_unexecuted_blocks=1 00:25:54.257 00:25:54.257 ' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.257 
06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:54.257 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
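(The nvmftestinit/nvmf_veth_init sequence traced next wires up a bridged veth topology before any NVMe/TCP traffic flows. Condensed to the first initiator/target pair, it is roughly the following; the nvmf_init_if2/nvmf_tgt_if2 pair is set up the same way on 10.0.0.2 and 10.0.0.4.)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_FIRST_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br                     # both peer ends join one bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the veth

This is why the initial "Cannot find device" / "Cannot open network namespace" messages below are expected: the cleanup pass runs before any of these devices exist, and each teardown command is allowed to fail.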
00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:54.257 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:54.258 Cannot find device "nvmf_init_br" 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:54.258 Cannot find device "nvmf_init_br2" 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:25:54.258 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:25:54.516 Cannot find device "nvmf_tgt_br" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:54.516 Cannot find device "nvmf_tgt_br2" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:54.516 Cannot find device "nvmf_init_br" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:54.516 Cannot find device "nvmf_init_br2" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:54.516 Cannot find device "nvmf_tgt_br" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:54.516 Cannot find device "nvmf_tgt_br2" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:54.516 Cannot find device "nvmf_br" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:54.516 Cannot find device "nvmf_init_if" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:54.516 Cannot find device "nvmf_init_if2" 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:54.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:54.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.516 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:54.517 
06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:54.517 06:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:54.517 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:54.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:54.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:25:54.776 00:25:54.776 --- 10.0.0.3 ping statistics --- 00:25:54.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.776 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:54.776 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:54.776 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:25:54.776 00:25:54.776 --- 10.0.0.4 ping statistics --- 00:25:54.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.776 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:54.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:54.776 00:25:54.776 --- 10.0.0.1 ping statistics --- 00:25:54.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.776 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:54.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:25:54.776 00:25:54.776 --- 10.0.0.2 ping statistics --- 00:25:54.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.776 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=107480 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 107480 00:25:54.776 06:28:14 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 107480 ']' 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.776 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:54.776 [2024-11-27 06:28:14.121855] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:25:54.776 [2024-11-27 06:28:14.121933] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.776 [2024-11-27 06:28:14.239796] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:54.776 [2024-11-27 06:28:14.259840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:54.776 [2024-11-27 06:28:14.277660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.776 [2024-11-27 06:28:14.277698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.776 [2024-11-27 06:28:14.277703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.776 [2024-11-27 06:28:14.277707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.776 [2024-11-27 06:28:14.277710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
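(With the target app up, host/failover.sh drives the scenario that the rest of this test traces. Condensed from the RPC calls below, with the three listener ports looped for brevity:)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done
  # bdevperf then attaches NVMe0 with -x failover on 4420 and 4421, and the
  # test removes/re-adds listeners one at a time while verify I/O runs,
  # forcing the initiator to fail over between the surviving paths.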
00:25:54.776 [2024-11-27 06:28:14.278507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.776 [2024-11-27 06:28:14.278713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.776 [2024-11-27 06:28:14.278713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.712 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.712 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:55.712 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.712 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.712 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:55.712 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.712 06:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:55.712 [2024-11-27 06:28:15.179474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.712 06:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:55.971 Malloc0 00:25:55.971 06:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.231 06:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.489 06:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:56.489 [2024-11-27 06:28:16.017706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:56.748 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:56.749 [2024-11-27 06:28:16.221813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:56.749 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:25:57.007 [2024-11-27 06:28:16.421955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:25:57.007 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=107592 00:25:57.007 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:57.007 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 107592 /var/tmp/bdevperf.sock 00:25:57.007 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 107592 ']' 00:25:57.007 06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:57.007
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.007
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.007
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.007
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.007
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:57.265
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.265
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:57.265
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:57.522
NVMe0n1 00:25:57.522
06:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:57.780
00:25:57.780
06:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:57.780
06:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=107626 00:25:57.780
06:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:58.823
06:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:59.081
[2024-11-27 06:28:18.466557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105ae30 is same with the state(6) to be set 00:25:59.081
[the same tcp.c:1773 message for tqpair=0x105ae30 repeats ~37 more times between 06:28:18.466597 and 06:28:18.466736 while the 4420 path is torn down; duplicate lines omitted] 00:25:59.082
06:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:02.359
06:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:02.359
00:26:02.359
06:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:02.617
[2024-11-27 06:28:21.970850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105bc50 is same with the state(6) to be set 00:26:02.617
[the same tcp.c:1773 message for tqpair=0x105bc50 repeats ~57 more times between 06:28:21.970889 and 06:28:21.971113 while the 4421 path is torn down; duplicate lines omitted] 00:26:02.618
06:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:05.897
06:28:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:05.897
[2024-11-27 06:28:25.184918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:05.897
06:28:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:06.831
06:28:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:26:07.088
06:28:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 107626 00:26:13.655
{ 00:26:13.655
"results": [ 00:26:13.655
{ 00:26:13.655
"job": "NVMe0n1", 00:26:13.655
"core_mask": "0x1", 00:26:13.655
"workload": "verify", 00:26:13.655
"status": "finished", 00:26:13.655
"verify_range": { 00:26:13.655
"start": 0, 00:26:13.655
"length": 16384 00:26:13.655
}, 00:26:13.655
"queue_depth": 128, 00:26:13.655
"io_size": 4096, 00:26:13.655
"runtime": 15.012181, 00:26:13.655
"iops": 12375.68345332367, 00:26:13.655
"mibps": 48.34251348954559, 00:26:13.655
"io_failed": 4597, 00:26:13.655
"io_timeout": 0, 00:26:13.655
"avg_latency_us": 10071.728218655591, 00:26:13.655
"min_latency_us": 384.39384615384614, 00:26:13.655
"max_latency_us": 18450.904615384614 00:26:13.655
} 00:26:13.655
], 00:26:13.655
"core_count": 1 00:26:13.655
} 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 107592 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 107592 ']' 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 107592 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107592 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.655
killing process with pid 107592
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.655
06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107592' 00:26:13.655 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 107592 00:26:13.655 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 107592 00:26:13.655 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:13.655 [2024-11-27 06:28:16.477879] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:26:13.655 [2024-11-27 06:28:16.477978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107592 ] 00:26:13.655 [2024-11-27 06:28:16.591708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:13.655 [2024-11-27 06:28:16.618927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.655 [2024-11-27 06:28:16.638288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.655 Running I/O for 15 seconds... 00:26:13.655 12588.00 IOPS, 49.17 MiB/s [2024-11-27T06:28:33.185Z] [2024-11-27 06:28:18.467928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.655 [2024-11-27 06:28:18.467970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.467987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.655 [2024-11-27 06:28:18.467998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.655 [2024-11-27 06:28:18.468019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.655 [2024-11-27 06:28:18.468038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.655 [2024-11-27 06:28:18.468058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.655 [2024-11-27 06:28:18.468077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468087] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.655 [2024-11-27 06:28:18.468095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.655 [2024-11-27 06:28:18.468633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.655 [2024-11-27 06:28:18.468642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 
nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.468924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.468944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.468969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.468988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.468998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112568 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.656 [2024-11-27 06:28:18.469278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 
[2024-11-27 06:28:18.469317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.656 [2024-11-27 06:28:18.469709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.656 [2024-11-27 06:28:18.469720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.657 [2024-11-27 06:28:18.469728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.657 [2024-11-27 06:28:18.469747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.469982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.469990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:13.657 [2024-11-27 06:28:18.470308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.657 [2024-11-27 06:28:18.470373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.657 [2024-11-27 06:28:18.470403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112896 len:8 PRP1 0x0 PRP2 0x0 00:26:13.657 [2024-11-27 06:28:18.470412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:13.657 [2024-11-27 06:28:18.470429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.657 [2024-11-27 06:28:18.470435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112904 len:8 PRP1 0x0 PRP2 0x0 00:26:13.657 [2024-11-27 06:28:18.470449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:13.657 [2024-11-27 06:28:18.470465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.657 [2024-11-27 06:28:18.470471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112912 len:8 PRP1 0x0 PRP2 0x0 00:26:13.657 [2024-11-27 06:28:18.470480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:13.657 [2024-11-27 06:28:18.470495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.657 [2024-11-27 06:28:18.470501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112920 len:8 PRP1 0x0 PRP2 0x0 00:26:13.657 [2024-11-27 06:28:18.470510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:13.657 [2024-11-27 06:28:18.470526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.657 [2024-11-27 06:28:18.470532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112928 len:8 PRP1 0x0 PRP2 0x0 00:26:13.657 [2024-11-27 06:28:18.470541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:13.657 [2024-11-27 06:28:18.470555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.657 [2024-11-27 06:28:18.470562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112936 len:8 PRP1 0x0 PRP2 0x0 00:26:13.657 [2024-11-27 06:28:18.470570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:13.657 [2024-11-27 06:28:18.470585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.657 [2024-11-27 06:28:18.470591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112944 len:8 PRP1 0x0 PRP2 0x0 00:26:13.657 [2024-11-27 06:28:18.470600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470637] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:26:13.657 [2024-11-27 06:28:18.470676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.657 [2024-11-27 06:28:18.470687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.657 [2024-11-27 06:28:18.470706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.657 [2024-11-27 06:28:18.470725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.657 [2024-11-27 06:28:18.470748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.657 [2024-11-27 06:28:18.470758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:26:13.657 [2024-11-27 06:28:18.470797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b6750 (9): Bad file descriptor
00:26:13.657 [2024-11-27 06:28:18.474120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:13.657 [2024-11-27 06:28:18.504928] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:13.657 12399.50 IOPS, 48.44 MiB/s [2024-11-27T06:28:33.187Z] 12580.33 IOPS, 49.14 MiB/s [2024-11-27T06:28:33.187Z] 12658.50 IOPS, 49.45 MiB/s [2024-11-27T06:28:33.187Z]
00:26:13.657-660 [2024-11-27 06:28:21.971215-06:28:21.973177] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 126 repeated command/completion pairs condensed: every outstanding READ (sqid:1, lba:85680-85968, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (sqid:1, lba:85976-86680, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:13.660 [2024-11-27 06:28:21.973198-973238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request / 579:nvme_qpair_abort_queued_reqs: two queued WRITEs (sqid:1 cid:0, lba:86688 and lba:86696, PRP1 0x0 PRP2 0x0) completed manually with ABORTED - SQ DELETION (00/08) while aborting queued i/o
00:26:13.660 [2024-11-27 06:28:21.973272] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:26:13.660 [2024-11-27 06:28:21.973304-973354] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: four ASYNC EVENT REQUEST (0c) commands (qid:0, cid:3..0, cdw10:00000000 cdw11:00000000) likewise completed with ABORTED - SQ DELETION (00/08)
00:26:13.660 [2024-11-27 06:28:21.973361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:13.660 [2024-11-27 06:28:21.973381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b6750 (9): Bad file descriptor
00:26:13.660 [2024-11-27 06:28:21.975898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:13.660 [2024-11-27 06:28:22.002074] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
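Every abort in the burst above carries the same completion status, (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion", the expected outcome when a submission queue is torn down during a controller reset. A minimal C sketch of that decoding, using only the spec-defined values seen in this trace (the helper name is hypothetical, not part of SPDK):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: map the (sct/sc) status pair printed in the log,
 * e.g. "(00/08)", to the label the log prints for it. Only the codes that
 * actually appear in this trace are handled. */
static const char *status_string(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x00) {
        return "SUCCESS";
    }
    if (sct == 0x0 && sc == 0x08) {
        /* NVMe generic status 0x08: Command Aborted due to SQ Deletion */
        return "ABORTED - SQ DELETION";
    }
    return "UNKNOWN";
}

int main(void)
{
    /* Prints: (00/08) -> ABORTED - SQ DELETION */
    printf("(00/08) -> %s\n", status_string(0x0, 0x08));
    return 0;
}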
00:26:13.660 12584.00 IOPS, 49.16 MiB/s [2024-11-27T06:28:33.190Z] 12619.67 IOPS, 49.30 MiB/s [2024-11-27T06:28:33.190Z] 12649.86 IOPS, 49.41 MiB/s [2024-11-27T06:28:33.190Z] 12664.62 IOPS, 49.47 MiB/s [2024-11-27T06:28:33.190Z] 12685.44 IOPS, 49.55 MiB/s [2024-11-27T06:28:33.190Z]
00:26:13.660-662 [2024-11-27 06:28:26.406572-06:28:26.407690] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 70 repeated command/completion pairs condensed: one outstanding WRITE (sqid:1 cid:29, lba:25696, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and every outstanding READ (sqid:1, lba:24800-25344, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:13.662 [2024-11-27 06:28:26.407698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407848] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.407869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.407887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.407901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.407924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.407939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.407954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.407983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.407996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25792 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.662 [2024-11-27 06:28:26.408218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:13.662 [2024-11-27 06:28:26.408321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.662 [2024-11-27 06:28:26.408336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.662 [2024-11-27 06:28:26.408345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.663 [2024-11-27 06:28:26.408578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9040 is same with the state(6) to be set 00:26:13.663 [2024-11-27 06:28:26.408595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:13.663 [2024-11-27 06:28:26.408599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:13.663 [2024-11-27 06:28:26.408605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25688 len:8 PRP1 0x0 PRP2 0x0 00:26:13.663 [2024-11-27 06:28:26.408611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408645] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:26:13.663 [2024-11-27 06:28:26.408679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.663 [2024-11-27 06:28:26.408688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408696] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.663 [2024-11-27 06:28:26.408703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.663 [2024-11-27 06:28:26.408717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.663 [2024-11-27 06:28:26.408731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.663 [2024-11-27 06:28:26.408737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:13.663 [2024-11-27 06:28:26.411288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:13.663 [2024-11-27 06:28:26.411312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b6750 (9): Bad file descriptor 00:26:13.663 [2024-11-27 06:28:26.436679] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:13.663 12653.70 IOPS, 49.43 MiB/s [2024-11-27T06:28:33.193Z] 12676.73 IOPS, 49.52 MiB/s [2024-11-27T06:28:33.193Z] 12693.25 IOPS, 49.58 MiB/s [2024-11-27T06:28:33.193Z] 12715.08 IOPS, 49.67 MiB/s [2024-11-27T06:28:33.193Z] 12539.07 IOPS, 48.98 MiB/s [2024-11-27T06:28:33.193Z] 12377.20 IOPS, 48.35 MiB/s 00:26:13.663 Latency(us) 00:26:13.663 [2024-11-27T06:28:33.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:13.663 Verification LBA range: start 0x0 length 0x4000 00:26:13.663 NVMe0n1 : 15.01 12375.68 48.34 306.22 0.00 10071.73 384.39 18450.90 00:26:13.663 [2024-11-27T06:28:33.193Z] =================================================================================================================== 00:26:13.663 [2024-11-27T06:28:33.193Z] Total : 12375.68 48.34 306.22 0.00 10071.73 384.39 18450.90 00:26:13.663 Received shutdown signal, test time was about 15.000000 seconds 00:26:13.663 00:26:13.663 Latency(us) 00:26:13.663 [2024-11-27T06:28:33.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.663 [2024-11-27T06:28:33.193Z] =================================================================================================================== 00:26:13.663 [2024-11-27T06:28:33.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=107830 00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # 
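The count=3 / (( count != 3 )) pair above is the pass/fail gate for the 15-second phase: the script expects one "Resetting controller successful" line per failover it provoked, three in total. A minimal standalone sketch of the same gate, assuming the bdevperf output was captured to the try.txt file this test later cats and removes (illustrative, not the literal failover.sh body):

    #!/usr/bin/env bash
    # Count successful controller resets logged by bdev_nvme and require 3.
    count=$(grep -c 'Resetting controller successful' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi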
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 107830 /var/tmp/bdevperf.sock
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 107830 ']'
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:13.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:26:13.663 [2024-11-27 06:28:32.962762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:26:13.663 06:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:26:13.663 [2024-11-27 06:28:33.162872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:26:13.921 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:13.921 NVMe0n1
00:26:14.179 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:14.179
00:26:14.437 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:14.437
00:26:14.696 06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
06:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:14.696 06:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:14.954 06:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
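The @76-@84 trace above is the whole failover trigger in miniature: expose two extra portals, attach all three paths with failover enabled, then drop the path bdevperf is actively using. A condensed sketch of the same RPC sequence, using only the commands, addresses, and NQN visible in this run's trace (a sketch, not the verbatim failover.sh logic):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Expose two extra portals on the target (4420 already exists).
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422

    # Register all three paths with bdevperf's bdev_nvme, failover mode on.
    for port in 4420 4421 4422; do
        "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.3 -s "$port" -f ipv4 -n "$NQN" -x failover
    done

    # Removing the active path forces the failover that the log records as
    # "Start failover from ..." followed by "Resetting controller successful."
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n "$NQN"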
00:26:18.291 06:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
06:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
06:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=107953
06:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
06:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 107953
00:26:19.224 {
00:26:19.224   "results": [
00:26:19.224     {
00:26:19.224       "job": "NVMe0n1",
00:26:19.224       "core_mask": "0x1",
00:26:19.224       "workload": "verify",
00:26:19.224       "status": "finished",
00:26:19.224       "verify_range": {
00:26:19.224         "start": 0,
00:26:19.224         "length": 16384
00:26:19.224       },
00:26:19.224       "queue_depth": 128,
00:26:19.224       "io_size": 4096,
00:26:19.224       "runtime": 1.007433,
00:26:19.224       "iops": 13262.420428951602,
00:26:19.224       "mibps": 51.806329800592195,
00:26:19.224       "io_failed": 0,
00:26:19.224       "io_timeout": 0,
00:26:19.224       "avg_latency_us": 9612.493922495438,
00:26:19.224       "min_latency_us": 1569.083076923077,
00:26:19.224       "max_latency_us": 11141.12
00:26:19.224     }
00:26:19.224   ],
00:26:19.224   "core_count": 1
00:26:19.224 }
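The derived fields in the JSON above follow directly from the raw counters: mibps is iops * io_size / 2^20, and by Little's law the average latency should sit near queue_depth / iops. A quick standalone sanity check of both (a sketch over this run's numbers, not part of the test scripts):

    # Derived-field sanity check for the JSON results above.
    awk 'BEGIN {
        iops = 13262.420428951602; io_size = 4096; qd = 128
        # Throughput: iops * io_size / 2^20 -> ~51.806330, matching "mibps".
        printf "MiB/s  = %.6f\n", iops * io_size / 1048576
        # Little's law: qd / iops * 1e6 -> ~9651 us, within ~0.5% of the
        # reported avg_latency_us of 9612.49 (ramp/drain accounts for the gap).
        printf "avg_us = %.2f\n", qd / iops * 1e6
    }'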
00:26:19.224 06:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:19.224 [2024-11-27 06:28:32.554767] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization...
00:26:19.224 [2024-11-27 06:28:32.554830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107830 ]
00:26:19.224 [2024-11-27 06:28:32.665369] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:26:19.224 [2024-11-27 06:28:32.690174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:19.224 [2024-11-27 06:28:32.708147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:19.224 [2024-11-27 06:28:34.357164] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:26:19.224 [2024-11-27 06:28:34.357244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.224 [2024-11-27 06:28:34.357257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.224 [2024-11-27 06:28:34.357266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.224 [2024-11-27 06:28:34.357273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.224 [2024-11-27 06:28:34.357280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.224 [2024-11-27 06:28:34.357287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.224 [2024-11-27 06:28:34.357294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:19.224 [2024-11-27 06:28:34.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:19.224 [2024-11-27 06:28:34.357307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:19.224 [2024-11-27 06:28:34.357334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:19.224 [2024-11-27 06:28:34.357349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8c750 (9): Bad file descriptor
00:26:19.224 [2024-11-27 06:28:34.359726] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:19.224 Running I/O for 1 seconds...
00:26:19.224 13222.00 IOPS, 51.65 MiB/s
00:26:19.224
00:26:19.224 Latency(us)
00:26:19.224 [2024-11-27T06:28:38.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:19.224 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:19.224 Verification LBA range: start 0x0 length 0x4000
00:26:19.224 NVMe0n1 : 1.01 13262.42 51.81 0.00 0.00 9612.49 1569.08 11141.12
00:26:19.224 [2024-11-27T06:28:38.754Z] ===================================================================================================================
00:26:19.224 [2024-11-27T06:28:38.754Z] Total : 13262.42 51.81 0.00 0.00 9612.49 1569.08 11141.12
00:26:19.225 06:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
06:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:19.483 06:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:19.741 06:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
06:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:20.001 06:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:20.001 06:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:23.292 06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:23.292 06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 107830
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 107830 ']'
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 107830
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107830
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 107830
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107830'
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 107830
06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 107830
00:26:23.292 06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
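The @954-@978 trace above shows the shape of the killprocess helper: confirm the pid exists, refuse to kill a sudo wrapper, then kill and reap. A hedged reconstruction of that pattern from the trace alone (names follow the trace; the exact upstream autotest_common.sh body may differ):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # @954: refuse an empty pid
        kill -0 "$pid"                       # @958: fail if the process is gone
        if [ "$(uname)" = Linux ]; then      # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
            [ "$process_name" = sudo ] && return 1            # @964: never kill sudo itself
        fi
        echo "killing process with pid $pid" # @972
        kill "$pid"                          # @973
        wait "$pid"                          # @978: reap it so the exit code is collected
    }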
00:26:23.550 06:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:23.550 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:23.550 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:26:23.550 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:23.550 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:23.809 rmmod nvme_tcp
00:26:23.809 rmmod nvme_fabrics
00:26:23.809 rmmod nvme_keyring
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 107480 ']'
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 107480
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 107480 ']'
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 107480
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107480
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107480'
00:26:23.809 killing process with pid 107480
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 107480
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 107480
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:23.809 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:24.068 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0
00:26:24.069
00:26:24.069 real	0m29.912s
00:26:24.069 user	1m56.078s
00:26:24.069 sys	0m3.262s
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:24.069 ************************************
00:26:24.069 END TEST nvmf_failover
00:26:24.069 ************************************
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:24.069 ************************************
00:26:24.069 START TEST nvmf_host_discovery
00:26:24.069 ************************************
00:26:24.069 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:24.328 * Looking for test storage...
00:26:24.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:24.329 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
(scripts/common.sh@333-@368 trace condensed: cmp_versions splits '1.15' and '2' into fields, decimal-compares the leading components (1 < 2), and returns 0 -- the installed lcov predates version 2)
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:26:24.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:24.329 --rc genhtml_branch_coverage=1
00:26:24.329 --rc genhtml_function_coverage=1
00:26:24.329 --rc genhtml_legend=1
00:26:24.329 --rc geninfo_all_blocks=1
00:26:24.329 --rc geninfo_unexecuted_blocks=1
00:26:24.329
00:26:24.329 '
(common/autotest_common.sh@1706 LCOV_OPTS=... and @1707 export LCOV='lcov ...' / LCOV='lcov ...' traces condensed: the same option block above is echoed three more times verbatim)
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain triplet repeated five more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
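The hostnqn/hostid pair set at nvmf/common.sh@17-@18 above come from nvme gen-hostnqn, which emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<random-uuid>; the harness reuses that UUID as the host ID. A sketch of the relationship (the parameter expansion here is illustrative, not necessarily the upstream code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # strip the NQN prefix, keep just the UUID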
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.329 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.329 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:24.329 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.329 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:24.329 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.330 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
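The variable block above names every link, bridge, and namespace that nvmf_veth_init is about to build; the ip(8) commands appear verbatim further down the trace, interleaved with teardown output. A condensed sketch of the resulting topology, grouped by step (all interface names and 10.0.0.0/24 addresses are taken from the trace itself; assumes iproute2 and root privileges):

    #!/usr/bin/env bash
    # Sketch of the veth/bridge topology nvmf_veth_init builds, per the trace.
    set -e
    ip netns add nvmf_tgt_ns_spdk                                # target-side namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator pair 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2  # initiator pair 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target pair 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2   # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move target ends into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the four legs together
    for leg in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$leg" up
        ip link set "$leg" master nvmf_br
    done
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridge hairpin

The "Cannot find device" and "Cannot open network namespace" messages that follow are expected: the teardown half of the helper runs first and finds nothing to delete on a fresh VM.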
00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:24.330 Cannot find device "nvmf_init_br" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:24.330 Cannot find device "nvmf_init_br2" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:24.330 Cannot find device "nvmf_tgt_br" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:24.330 Cannot find device "nvmf_tgt_br2" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:24.330 Cannot find device "nvmf_init_br" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:24.330 Cannot find device "nvmf_init_br2" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:24.330 Cannot find device "nvmf_tgt_br" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:24.330 Cannot find device "nvmf_tgt_br2" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:24.330 Cannot find device "nvmf_br" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:24.330 Cannot find device "nvmf_init_if" 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:26:24.330 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:24.589 Cannot find device "nvmf_init_if2" 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:24.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:24.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:24.589 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:24.590 06:28:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:24.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:24.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:26:24.590 00:26:24.590 --- 10.0.0.3 ping statistics --- 00:26:24.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.590 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:24.590 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:24.590 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:26:24.590 00:26:24.590 --- 10.0.0.4 ping statistics --- 00:26:24.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.590 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:24.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:26:24.590 00:26:24.590 --- 10.0.0.1 ping statistics --- 00:26:24.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.590 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:24.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:24.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:26:24.590 00:26:24.590 --- 10.0.0.2 ping statistics --- 00:26:24.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.590 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=108297 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 108297 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 108297 ']' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.590 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.590 [2024-11-27 06:28:44.077061] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:26:24.590 [2024-11-27 06:28:44.077126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.849 [2024-11-27 06:28:44.192654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:26:24.849 [2024-11-27 06:28:44.221529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.849 [2024-11-27 06:28:44.239550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.849 [2024-11-27 06:28:44.239588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.849 [2024-11-27 06:28:44.239594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.849 [2024-11-27 06:28:44.239599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.849 [2024-11-27 06:28:44.239603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.849 [2024-11-27 06:28:44.239856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.417 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.417 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:25.417 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.417 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.417 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 [2024-11-27 06:28:44.956875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 [2024-11-27 06:28:44.964962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 null0 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:25.676 06:28:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 null1 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=108347 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 108347 /tmp/host.sock 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 108347 ']' 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:25.676 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.676 06:28:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 [2024-11-27 06:28:45.036171] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:26:25.676 [2024-11-27 06:28:45.036237] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108347 ] 00:26:25.677 [2024-11-27 06:28:45.150128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
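At this point two separate SPDK processes are coming up: the target (nvmfpid=108297) inside the namespace on the default RPC socket, and a second nvmf_tgt (hostpid=108347) playing the NVMe-oF host side on its own socket. A minimal sketch of that layout, with the launch lines and example RPCs lifted from the trace (rpc_cmd is autotest's wrapper around scripts/rpc.py):

    # Target side: runs inside the netns, answers RPCs on /var/tmp/spdk.sock.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Host side: a second nvmf_tgt acting as the initiator/bdev layer, on a
    # separate RPC socket so the two applications do not collide.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    # RPCs without -s reach the target; RPCs with -s /tmp/host.sock reach the host:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                    # target
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test       # host

Routing RPCs by socket is what lets one test script drive both sides of the fabric from a single shell.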
00:26:25.677 [2024-11-27 06:28:45.180278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.677 [2024-11-27 06:28:45.199381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.614 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.615 06:28:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.615 06:28:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.615 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.873 [2024-11-27 06:28:46.157156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.873 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:26.874 06:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:27.441 [2024-11-27 06:28:46.929337] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:27.441 [2024-11-27 06:28:46.929364] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:27.441 [2024-11-27 06:28:46.929377] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:27.744 [2024-11-27 06:28:47.015443] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:26:27.744 [2024-11-27 06:28:47.069738] 
bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:26:27.744 [2024-11-27 06:28:47.070365] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf9ce00:1 started. 00:26:27.744 [2024-11-27 06:28:47.071902] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:27.744 [2024-11-27 06:28:47.071933] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:27.744 [2024-11-27 06:28:47.078073] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf9ce00 was disconnected and freed. delete nvme_qpair. 00:26:28.003 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.003 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:28.003 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:28.003 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:28.003 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.004 06:28:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 [2024-11-27 06:28:47.470856] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf9d240:1 started. 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.004 [2024-11-27 06:28:47.478072] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf9d240 was disconnected and freed. delete nvme_qpair. 
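The recurring 'local max=10', '(( max-- ))', 'eval', and 'sleep 1' fragments in this trace are the expansion of the waitforcondition helper from common/autotest_common.sh. A reconstruction of the polling pattern as inferred from the trace (a sketch; the in-tree helper may differ in detail):

    # Poll an arbitrary bash expression until it holds or ~10 seconds elapse.
    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10     # upper bound on polling attempts, one per second
        while (( max-- )); do
            if eval "$cond"; then
                return 0            # condition met
            fi
            sleep 1                 # matches the 'sleep 1' entries in the trace
        done
        return 1                    # timed out; callers treat this as failure
    }

    # Usage, as seen above:
    # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'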
00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.004 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.004 [2024-11-27 06:28:47.529624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:28.004 [2024-11-27 06:28:47.530210] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:28.004 [2024-11-27 06:28:47.530231] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:28.262 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:28.263 06:28:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.263 [2024-11-27 06:28:47.616291] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:28.263 06:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:28.263 [2024-11-27 06:28:47.674606] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:26:28.263 [2024-11-27 06:28:47.674652] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:28.263 [2024-11-27 06:28:47.674659] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:28.263 [2024-11-27 06:28:47.674663] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 
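The conditions being polled lean on two small RPC wrappers from discovery.sh whose pipelines appear verbatim in the trace (@55 and @63). A plausible reconstruction, assuming rpc_cmd is the harness wrapper around scripts/rpc.py and /tmp/host.sock is the host-side RPC socket used throughout this test:

    # List bdev names known to the host app, sorted and space-separated.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # List the trsvcid (port) of every path to one controller, e.g. "4420 4421".
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }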
00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.198 [2024-11-27 06:28:48.710160] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:29.198 [2024-11-27 06:28:48.710189] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.198 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:29.198 [2024-11-27 06:28:48.719925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.199 [2024-11-27 06:28:48.719951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.199 [2024-11-27 06:28:48.719959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:29.199 [2024-11-27 06:28:48.719964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.199 [2024-11-27 06:28:48.719969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.199 [2024-11-27 06:28:48.719973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.199 [2024-11-27 06:28:48.719978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.199 [2024-11-27 06:28:48.719982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.199 [2024-11-27 06:28:48.719987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d610 is same with the state(6) to be set 00:26:29.458 [2024-11-27 06:28:48.729886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d610 (9): Bad file descriptor 00:26:29.458 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.458 [2024-11-27 06:28:48.739895] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:29.458 [2024-11-27 06:28:48.739919] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:29.458 [2024-11-27 06:28:48.739922] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:29.458 [2024-11-27 06:28:48.739925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.458 [2024-11-27 06:28:48.739946] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:29.458 [2024-11-27 06:28:48.740013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-27 06:28:48.740030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6d610 with addr=10.0.0.3, port=4420 00:26:29.459 [2024-11-27 06:28:48.740037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d610 is same with the state(6) to be set 00:26:29.459 [2024-11-27 06:28:48.740046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d610 (9): Bad file descriptor 00:26:29.459 [2024-11-27 06:28:48.740054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:29.459 [2024-11-27 06:28:48.740058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:29.459 [2024-11-27 06:28:48.740063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:29.459 [2024-11-27 06:28:48.740068] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:29.459 [2024-11-27 06:28:48.740071] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:29.459 [2024-11-27 06:28:48.740074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:29.459 [2024-11-27 06:28:48.749953] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:29.459 [2024-11-27 06:28:48.749969] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:29.459 [2024-11-27 06:28:48.749972] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:29.459 [2024-11-27 06:28:48.749975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.459 [2024-11-27 06:28:48.749992] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:29.459 [2024-11-27 06:28:48.750032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-27 06:28:48.750044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6d610 with addr=10.0.0.3, port=4420 00:26:29.459 [2024-11-27 06:28:48.750050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d610 is same with the state(6) to be set 00:26:29.459 [2024-11-27 06:28:48.750057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d610 (9): Bad file descriptor 00:26:29.459 [2024-11-27 06:28:48.750064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:29.459 [2024-11-27 06:28:48.750068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:29.459 [2024-11-27 06:28:48.750073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:29.459 [2024-11-27 06:28:48.750077] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:29.459 [2024-11-27 06:28:48.750079] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:29.459 [2024-11-27 06:28:48.750082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
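The repeated "connect() failed, errno = 111" blocks are the expected fallout of removing the 4420 listener at @127: the host's reconnect poller keeps retrying the dead 10.0.0.3:4420 path roughly every 10 ms until discovery steers it to 4421. On Linux, errno 111 is ECONNREFUSED, which can be confirmed with a one-liner (assumes python3 on PATH):

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # ECONNREFUSED Connection refused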
00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.459 [2024-11-27 06:28:48.760000] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:29.459 [2024-11-27 06:28:48.760018] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:29.459 [2024-11-27 06:28:48.760021] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:29.459 [2024-11-27 06:28:48.760024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.459 [2024-11-27 06:28:48.760040] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:29.459 [2024-11-27 06:28:48.760073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-27 06:28:48.760081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6d610 with addr=10.0.0.3, port=4420 00:26:29.459 [2024-11-27 06:28:48.760086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d610 is same with the state(6) to be set 00:26:29.459 [2024-11-27 06:28:48.760093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d610 (9): Bad file descriptor 00:26:29.459 [2024-11-27 06:28:48.760101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:29.459 [2024-11-27 06:28:48.760105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:29.459 [2024-11-27 06:28:48.760110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:29.459 [2024-11-27 06:28:48.760114] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:29.459 [2024-11-27 06:28:48.760116] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:29.459 [2024-11-27 06:28:48.760119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:29.459 [2024-11-27 06:28:48.770048] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:29.459 [2024-11-27 06:28:48.770063] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:29.459 [2024-11-27 06:28:48.770066] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:29.459 [2024-11-27 06:28:48.770069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.459 [2024-11-27 06:28:48.770083] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:29.459 [2024-11-27 06:28:48.770122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-27 06:28:48.770130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6d610 with addr=10.0.0.3, port=4420 00:26:29.459 [2024-11-27 06:28:48.770135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d610 is same with the state(6) to be set 00:26:29.459 [2024-11-27 06:28:48.770142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d610 (9): Bad file descriptor 00:26:29.459 [2024-11-27 06:28:48.770149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:29.459 [2024-11-27 06:28:48.770153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:29.459 [2024-11-27 06:28:48.770157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:29.459 [2024-11-27 06:28:48.770161] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:29.459 [2024-11-27 06:28:48.770164] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:29.459 [2024-11-27 06:28:48.770166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:29.459 [2024-11-27 06:28:48.780090] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:29.459 [2024-11-27 06:28:48.780104] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:29.459 [2024-11-27 06:28:48.780107] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:29.459 [2024-11-27 06:28:48.780110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.459 [2024-11-27 06:28:48.780124] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:29.459 [2024-11-27 06:28:48.780152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-27 06:28:48.780162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6d610 with addr=10.0.0.3, port=4420 00:26:29.459 [2024-11-27 06:28:48.780167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d610 is same with the state(6) to be set 00:26:29.459 [2024-11-27 06:28:48.780174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d610 (9): Bad file descriptor 00:26:29.459 [2024-11-27 06:28:48.780180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:29.459 [2024-11-27 06:28:48.780184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:29.459 [2024-11-27 06:28:48.780189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:29.459 [2024-11-27 06:28:48.780193] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:29.459 [2024-11-27 06:28:48.780195] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:29.459 [2024-11-27 06:28:48.780197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:29.459 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:29.460 [2024-11-27 06:28:48.790130] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:29.460 [2024-11-27 06:28:48.790144] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:29.460 [2024-11-27 06:28:48.790146] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:29.460 [2024-11-27 06:28:48.790149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:29.460 [2024-11-27 06:28:48.790165] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
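The notification bookkeeping used throughout this trace (@74-@75), and again just below for is_notification_count_eq 0, pairs a counter with a cursor so each check only sees events newer than the last one consumed. A sketch inferred from the xtrace (notify_id climbs 1, 2, then 4 above, matching the counts returned):

    # Count notifications newer than notify_id, then advance the cursor.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }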
00:26:29.460 [2024-11-27 06:28:48.790196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-27 06:28:48.790204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6d610 with addr=10.0.0.3, port=4420 00:26:29.460 [2024-11-27 06:28:48.790209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d610 is same with the state(6) to be set 00:26:29.460 [2024-11-27 06:28:48.790216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d610 (9): Bad file descriptor 00:26:29.460 [2024-11-27 06:28:48.790227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:29.460 [2024-11-27 06:28:48.790231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:29.460 [2024-11-27 06:28:48.790235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:29.460 [2024-11-27 06:28:48.790239] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:29.460 [2024-11-27 06:28:48.790242] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:29.460 [2024-11-27 06:28:48.790244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:29.460 [2024-11-27 06:28:48.796243] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:26:29.460 [2024-11-27 06:28:48.796260] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:29.460 06:28:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:29.460 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:29.461 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.461 06:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.833 [2024-11-27 06:28:49.969511] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:30.833 [2024-11-27 06:28:49.969538] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:30.833 [2024-11-27 06:28:49.969547] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:30.833 [2024-11-27 06:28:50.055653] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:26:30.833 [2024-11-27 06:28:50.114043] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:26:30.833 [2024-11-27 06:28:50.114526] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xfa37c0:1 started. 
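With the discovery controller "nvme" re-attached, the two bdev_nvme_start_discovery calls that follow are negative tests: reusing the name nvme, and then pointing nvme_second at the same 8009 endpoint, must both fail with Code=-17 (File exists). Roughly what the NOT wrapper asserts, written out as a standalone sketch:

    # Expect failure: a discovery service named "nvme" already exists.
    if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
            -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo 'unexpected success: duplicate discovery name should be rejected' >&2
        exit 1
    fi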
00:26:30.833 [2024-11-27 06:28:50.116174] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:30.833 [2024-11-27 06:28:50.116212] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:30.833 [2024-11-27 06:28:50.118739] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xfa37c0 was disconnected and freed. delete nvme_qpair. 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.833 2024/11/27 06:28:50 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:30.833 request: 00:26:30.833 { 00:26:30.833 "method": "bdev_nvme_start_discovery", 00:26:30.833 "params": { 00:26:30.833 "name": "nvme", 00:26:30.833 "trtype": "tcp", 00:26:30.833 "traddr": "10.0.0.3", 00:26:30.833 "adrfam": "ipv4", 00:26:30.833 "trsvcid": "8009", 00:26:30.833 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:30.833 "wait_for_attach": true 00:26:30.833 } 00:26:30.833 } 00:26:30.833 Got JSON-RPC error response 00:26:30.833 GoRPCClient: error on JSON-RPC call 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.833 2024/11/27 06:28:50 error on JSON-RPC call, method: 
bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:30.833 request: 00:26:30.833 { 00:26:30.833 "method": "bdev_nvme_start_discovery", 00:26:30.833 "params": { 00:26:30.833 "name": "nvme_second", 00:26:30.833 "trtype": "tcp", 00:26:30.833 "traddr": "10.0.0.3", 00:26:30.833 "adrfam": "ipv4", 00:26:30.833 "trsvcid": "8009", 00:26:30.833 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:30.833 "wait_for_attach": true 00:26:30.833 } 00:26:30.833 } 00:26:30.833 Got JSON-RPC error response 00:26:30.833 GoRPCClient: error on JSON-RPC call 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:30.833 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 
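The last negative test (@155) points nvme_second at port 8010, where nothing listens; the connect attempts below fail with errno 111 once per second until the 3000 ms attach window (-T 3000) expires and the RPC returns Code=-110 (Connection timed out). The equivalent standalone check, again only a sketch of what the NOT wrapper verifies:

    # Expect a timeout: no listener on 10.0.0.3:8010, so the 3 s attach
    # window elapses and bdev_nvme_start_discovery fails with -110.
    if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
            -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
        echo 'unexpected success: discovery to a closed port should time out' >&2
        exit 1
    fi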
00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.834 06:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.209 [2024-11-27 06:28:51.296852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.209 [2024-11-27 06:28:51.296895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6acd0 with addr=10.0.0.3, port=8010 00:26:32.209 [2024-11-27 06:28:51.296915] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:32.209 [2024-11-27 06:28:51.296921] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:32.209 [2024-11-27 06:28:51.296926] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:26:32.774 [2024-11-27 06:28:52.296849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.774 [2024-11-27 06:28:52.296886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6acd0 with addr=10.0.0.3, port=8010 00:26:32.774 [2024-11-27 06:28:52.296898] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:32.774 [2024-11-27 06:28:52.296910] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:32.774 [2024-11-27 06:28:52.296916] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:26:34.154 [2024-11-27 06:28:53.296771] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:26:34.154 2024/11/27 06:28:53 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:26:34.154 request: 00:26:34.154 { 00:26:34.154 "method": "bdev_nvme_start_discovery", 00:26:34.154 "params": { 00:26:34.154 "name": "nvme_second", 00:26:34.154 "trtype": "tcp", 00:26:34.154 "traddr": "10.0.0.3", 00:26:34.154 "adrfam": "ipv4", 00:26:34.154 "trsvcid": "8010", 00:26:34.154 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:34.154 "wait_for_attach": false, 00:26:34.154 "attach_timeout_ms": 3000 00:26:34.154 } 
00:26:34.154 } 00:26:34.154 Got JSON-RPC error response 00:26:34.154 GoRPCClient: error on JSON-RPC call 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 108347 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:34.154 06:28:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:39.445 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.446 rmmod nvme_tcp 00:26:39.446 rmmod nvme_fabrics 00:26:39.446 rmmod nvme_keyring 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 108297 ']' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 108297 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 108297 ']' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 108297 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 
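Teardown then proceeds as traced here: the exit trap is dropped, the host app is killed, and nvmftestfini unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules and dismantles the veth/netns plumbing. Condensed, with run-specific values treated as placeholders:

    # discovery.sh@159-@162: drop the exit trap, kill the host app, clean up.
    trap - SIGINT SIGTERM EXIT
    kill "$host_pid"   # 108347 in this run; the pid varies per build
    nvmftestfini       # nvmf/common.sh: sync, modprobe -r nvme-tcp/nvme-fabrics, netns teardown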
00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108297 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.446 killing process with pid 108297 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108297' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 108297 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 108297 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:26:39.446 00:26:39.446 real 0m15.098s 00:26:39.446 user 0m24.131s 00:26:39.446 sys 0m1.252s 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.446 ************************************ 00:26:39.446 END TEST nvmf_host_discovery 00:26:39.446 ************************************ 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.446 ************************************ 00:26:39.446 START TEST nvmf_host_multipath_status 00:26:39.446 ************************************ 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:39.446 * Looking for test storage... 
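The multipath_status test starting here builds a TCP target with one malloc namespace and two listeners (4420 and 4421) on 10.0.0.3, attaches both as paths of a single bdevperf controller, then repeatedly flips per-listener ANA states and asserts the per-path flags. Throughout the run, per-port status is read with one jq query shape; a sketch of that check, with socket path and field names exactly as they appear later in this log:

  # Read one flag ("current", "connected" or "accessible") for the path
  # whose listener uses service id 4420.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'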
00:26:39.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.446 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.447 --rc genhtml_branch_coverage=1 00:26:39.447 --rc genhtml_function_coverage=1 00:26:39.447 --rc genhtml_legend=1 00:26:39.447 --rc geninfo_all_blocks=1 00:26:39.447 --rc geninfo_unexecuted_blocks=1 00:26:39.447 00:26:39.447 ' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:39.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.447 --rc genhtml_branch_coverage=1 00:26:39.447 --rc genhtml_function_coverage=1 00:26:39.447 --rc genhtml_legend=1 00:26:39.447 --rc geninfo_all_blocks=1 00:26:39.447 --rc geninfo_unexecuted_blocks=1 00:26:39.447 00:26:39.447 ' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.447 --rc genhtml_branch_coverage=1 00:26:39.447 --rc genhtml_function_coverage=1 00:26:39.447 --rc genhtml_legend=1 00:26:39.447 --rc geninfo_all_blocks=1 00:26:39.447 --rc geninfo_unexecuted_blocks=1 00:26:39.447 00:26:39.447 ' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.447 --rc genhtml_branch_coverage=1 00:26:39.447 --rc genhtml_function_coverage=1 00:26:39.447 --rc genhtml_legend=1 00:26:39.447 --rc geninfo_all_blocks=1 00:26:39.447 --rc geninfo_unexecuted_blocks=1 00:26:39.447 00:26:39.447 ' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:39.447 06:28:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:39.447 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:39.448 Cannot find device "nvmf_init_br" 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:39.448 Cannot find device "nvmf_init_br2" 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:39.448 Cannot find device "nvmf_tgt_br" 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.448 Cannot find device "nvmf_tgt_br2" 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:39.448 Cannot find device "nvmf_init_br" 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:39.448 Cannot find device "nvmf_init_br2" 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:26:39.448 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:39.709 Cannot find device "nvmf_tgt_br" 00:26:39.709 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:26:39.709 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:39.709 Cannot find device "nvmf_tgt_br2" 00:26:39.709 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:26:39.709 06:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:39.709 Cannot find device "nvmf_br" 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:26:39.709 Cannot find device "nvmf_init_if" 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:39.709 Cannot find device "nvmf_init_if2" 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:26:39.709 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:39.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:39.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:26:39.710 00:26:39.710 --- 10.0.0.3 ping statistics --- 00:26:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.710 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:39.710 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:39.710 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:26:39.710 00:26:39.710 --- 10.0.0.4 ping statistics --- 00:26:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.710 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:39.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:39.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:26:39.710 00:26:39.710 --- 10.0.0.1 ping statistics --- 00:26:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.710 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:39.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:26:39.710 00:26:39.710 --- 10.0.0.2 ping statistics --- 00:26:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.710 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=108937 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:39.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 108937 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 108937 ']' 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
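The topology the four pings just verified is two initiator veths on the host (10.0.0.1 and 10.0.0.2) and two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge, with iptables ACCEPT rules for port 4420. A condensed sketch of that setup, taken from the nvmf_veth_init trace above (link-up steps and the second veth pair elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # The target then runs inside the namespace, as logged above:
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3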
00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.710 06:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.970 [2024-11-27 06:28:59.276300] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:26:39.970 [2024-11-27 06:28:59.276363] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.970 [2024-11-27 06:28:59.392747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:39.970 [2024-11-27 06:28:59.419098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:39.970 [2024-11-27 06:28:59.445458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.970 [2024-11-27 06:28:59.445503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.971 [2024-11-27 06:28:59.445510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.971 [2024-11-27 06:28:59.445515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.971 [2024-11-27 06:28:59.445520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.971 [2024-11-27 06:28:59.446454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.971 [2024-11-27 06:28:59.446648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=108937 00:26:40.912 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:41.173 [2024-11-27 06:29:00.442816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.173 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:41.434 Malloc0 00:26:41.434 06:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:41.695 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:41.955 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:41.955 [2024-11-27 06:29:01.456367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:41.955 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:42.216 [2024-11-27 06:29:01.672500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=109034 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 109034 /var/tmp/bdevperf.sock 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 109034 ']' 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
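With both listeners up and bdevperf waiting on /var/tmp/bdevperf.sock, the next steps attach 10.0.0.3:4420 and 10.0.0.3:4421 to the same controller name (Nvme0, with -x multipath), start the 90-second verify workload through bdevperf.py, and then drive the ANA state machine. Each transition below is a pair of calls of this shape, with states drawn from optimized / non_optimized / inaccessible as exercised in the remainder of this log:

  # Example transition: make 4420 non-optimized and 4421 inaccessible;
  # check_status then re-reads bdev_nvme_get_io_paths and asserts the
  # expected current/connected/accessible flags for each path.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible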
00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.216 06:29:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:43.604 06:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.604 06:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:43.604 06:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:43.604 06:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:43.866 Nvme0n1 00:26:43.866 06:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:44.128 Nvme0n1 00:26:44.129 06:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:44.129 06:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:46.673 06:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:46.673 06:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:26:46.673 06:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:46.673 06:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:47.613 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:47.613 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:47.613 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.613 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.872 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.872 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:47.872 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.872 06:29:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.130 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.130 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.130 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.130 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.391 06:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.652 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.652 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.652 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.652 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:48.912 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:49.173 06:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:49.433 06:29:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:50.376 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:50.376 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:50.376 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.376 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.638 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.638 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.638 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.638 06:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.638 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.638 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.899 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.899 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.899 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.899 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.899 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.899 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.160 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.160 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.160 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.160 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.421 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.421 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.421 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.421 06:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.682 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.682 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:51.682 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:51.943 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:26:51.943 06:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.325 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.585 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.585 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.585 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:53.585 06:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.846 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:54.107 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.107 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:54.107 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.107 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:54.367 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.367 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:54.367 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:54.627 06:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:54.887 06:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:55.826 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:55.826 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:55.826 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.826 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:56.087 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.087 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:56.087 06:29:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.087 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:56.087 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.087 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:56.087 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.087 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:56.347 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.347 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:56.347 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.347 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:56.607 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.607 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:56.607 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.607 06:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.867 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.867 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:56.867 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.867 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.129 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.129 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:57.129 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:57.129 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:57.390 06:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:58.392 06:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:58.392 06:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:58.392 06:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.392 06:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.654 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.654 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:58.654 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.654 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.916 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.916 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.916 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.916 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:59.177 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:59.439 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.439 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:59.439 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.439 06:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:59.700 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.700 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:59.700 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:59.962 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:59.962 06:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.347 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:01.609 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.609 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:01.610 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.610 06:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:01.610 06:29:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.610 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:01.610 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:01.610 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.872 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.872 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:01.872 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.872 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.133 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.133 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:02.133 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.133 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.395 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.395 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:02.657 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:02.657 06:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:27:02.657 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:02.918 06:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:03.861 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:03.861 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:03.861 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.861 06:29:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.122 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.122 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:04.122 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.122 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:04.384 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.384 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:04.384 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.384 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:04.645 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.645 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:04.645 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.645 06:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:04.645 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.645 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:04.645 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.645 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.905 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.906 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:04.906 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.906 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:05.166 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.166 06:29:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:05.166 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:05.427 06:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:05.689 06:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:06.635 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:06.635 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:06.635 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.635 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.894 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.894 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:06.894 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.894 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:06.894 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.894 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.155 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.155 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.155 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.155 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:07.155 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.155 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:07.415 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.415 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:07.415 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.415 06:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:07.676 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.676 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:07.676 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:07.676 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.937 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.937 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:07.937 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:07.937 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:27:08.197 06:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.582 06:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:09.582 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.582 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:27:09.582 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.582 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:09.854 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.855 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:09.855 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.855 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.202 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.202 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:10.202 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.202 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:10.463 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.463 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:10.463 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.463 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:10.463 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.463 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:10.463 06:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:10.722 06:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:10.982 06:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:11.921 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:11.921 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.921 06:29:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.921 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.179 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.179 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:12.179 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:12.179 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.437 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.437 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:12.437 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.437 06:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.698 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.698 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.698 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.698 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.956 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 109034 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 109034 ']' 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 109034 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109034 00:27:13.214 killing process with pid 109034 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109034' 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 109034 00:27:13.214 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 109034 00:27:13.214 { 00:27:13.214 "results": [ 00:27:13.214 { 00:27:13.214 "job": "Nvme0n1", 00:27:13.214 "core_mask": "0x4", 00:27:13.214 "workload": "verify", 00:27:13.214 "status": "terminated", 00:27:13.214 "verify_range": { 00:27:13.214 "start": 0, 00:27:13.214 "length": 16384 00:27:13.214 }, 00:27:13.214 "queue_depth": 128, 00:27:13.214 "io_size": 4096, 00:27:13.214 "runtime": 28.985291, 00:27:13.214 "iops": 12773.858299369842, 00:27:13.214 "mibps": 49.897883981913445, 00:27:13.214 "io_failed": 0, 00:27:13.214 "io_timeout": 0, 00:27:13.214 "avg_latency_us": 10000.403424676033, 00:27:13.214 "min_latency_us": 77.98153846153846, 00:27:13.214 "max_latency_us": 3019898.88 00:27:13.214 } 00:27:13.214 ], 00:27:13.214 "core_count": 1 00:27:13.214 } 00:27:13.477 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 109034 00:27:13.477 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:13.477 [2024-11-27 06:29:01.742135] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:27:13.478 [2024-11-27 06:29:01.742257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109034 ] 00:27:13.478 [2024-11-27 06:29:01.864803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:13.478 [2024-11-27 06:29:01.885807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.478 [2024-11-27 06:29:01.918157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.478 Running I/O for 90 seconds... 
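Before the bdevperf log continues, a note on the trace above: between the ANA-state flips it repeatedly drives three small helpers from test/nvmf/host/multipath_status.sh. A minimal sketch of those helpers, reconstructed from the traced commands; the authoritative definitions live in the script itself and may differ in detail:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Flip the ANA state of the two listeners (ports 4420 and 4421) on cnode1.
set_ANA_state() {
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

# Ask bdevperf for its I/O paths over its RPC socket and assert one boolean
# field (current / connected / accessible) for the path on the given port.
port_status() {
    local port=$1 field=$2 expected=$3 status
    status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $status == "$expected" ]]
}

# Six booleans: current, connected, accessible, for port 4420 then 4421,
# in the order the trace shows (multipath_status.sh@68 through @73).
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Each set_ANA_state in the trace is followed by a one-second sleep so the initiator can observe the ANA change before check_status asserts the expected flags; once the bdev is switched to "-p active_active" (sh@116), both optimized paths are expected to report current==true at the same time.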
00:27:13.478 12961.00 IOPS, 50.63 MiB/s [2024-11-27T06:29:33.008Z] 13043.00 IOPS, 50.95 MiB/s [2024-11-27T06:29:33.008Z] 13136.33 IOPS, 51.31 MiB/s [2024-11-27T06:29:33.008Z] 13136.75 IOPS, 51.32 MiB/s [2024-11-27T06:29:33.008Z] 13119.00 IOPS, 51.25 MiB/s [2024-11-27T06:29:33.008Z] 13064.00 IOPS, 51.03 MiB/s [2024-11-27T06:29:33.008Z] 13046.71 IOPS, 50.96 MiB/s [2024-11-27T06:29:33.008Z] 13026.75 IOPS, 50.89 MiB/s [2024-11-27T06:29:33.008Z] 13041.11 IOPS, 50.94 MiB/s [2024-11-27T06:29:33.008Z] 13058.20 IOPS, 51.01 MiB/s [2024-11-27T06:29:33.008Z] 13081.27 IOPS, 51.10 MiB/s [2024-11-27T06:29:33.008Z] 13099.67 IOPS, 51.17 MiB/s [2024-11-27T06:29:33.008Z] [2024-11-27 06:29:16.606089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:13.478 
[2024-11-27 06:29:16.606300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.606980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.606987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607097] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.478 [2024-11-27 06:29:16.607178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.478 [2024-11-27 06:29:16.607198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:13.478 [2024-11-27 06:29:16.607211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 
06:29:16.607303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.479 [2024-11-27 06:29:16.607382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.479 [2024-11-27 06:29:16.607402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.479 [2024-11-27 06:29:16.607422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.479 [2024-11-27 06:29:16.607441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.479 [2024-11-27 06:29:16.607464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.479 [2024-11-27 06:29:16.607484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:13.479 [2024-11-27 06:29:16.607497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41728 len:8 SGL 
00:27:13.479 [2024-11-27 06:29:16.607504 - 06:29:16.609406] nvme_qpair.c: 243/474: *NOTICE*: [repetitive qpair trace condensed: WRITE commands (sqid:1, lba 41736-42240, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba 41312-41424, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by a completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0]
00:27:13.481 12999.54 IOPS, 50.78 MiB/s [2024-11-27T06:29:33.011Z]
12071.00 IOPS, 47.15 MiB/s [2024-11-27T06:29:33.011Z]
11266.27 IOPS, 44.01 MiB/s [2024-11-27T06:29:33.011Z]
10649.56 IOPS, 41.60 MiB/s [2024-11-27T06:29:33.011Z]
10786.88 IOPS, 42.14 MiB/s [2024-11-27T06:29:33.011Z]
10901.56 IOPS, 42.58 MiB/s [2024-11-27T06:29:33.011Z]
11173.89 IOPS, 43.65 MiB/s [2024-11-27T06:29:33.011Z]
11551.60 IOPS, 45.12 MiB/s [2024-11-27T06:29:33.011Z]
11899.52 IOPS, 46.48 MiB/s [2024-11-27T06:29:33.011Z]
11979.73 IOPS, 46.80 MiB/s [2024-11-27T06:29:33.011Z]
12026.22 IOPS, 46.98 MiB/s [2024-11-27T06:29:33.011Z]
12080.17 IOPS, 47.19 MiB/s [2024-11-27T06:29:33.011Z]
12336.20 IOPS, 48.19 MiB/s [2024-11-27T06:29:33.011Z]
12584.69 IOPS, 49.16 MiB/s [2024-11-27T06:29:33.011Z]
00:27:13.481 [2024-11-27 06:29:30.384404 - 06:29:30.386807] nvme_qpair.c: 243/474: *NOTICE*: [repetitive qpair trace condensed: WRITE commands (sqid:1, lba 94208-94552, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba 93616-94192, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by a completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0]
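(The two condensed blocks above stand in for several hundred near-identical qpair traces emitted while the active path reported ANA-inaccessible. When a tally of the elided completions is needed from a saved copy of the raw console log, a plain pattern count is enough; the filename below is a placeholder, not a path from this run:)

  # Count completions flagged path-inaccessible in a saved console log
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log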
00:27:13.483 12735.96 IOPS, 49.75 MiB/s [2024-11-27T06:29:33.013Z]
12756.43 IOPS, 49.83 MiB/s [2024-11-27T06:29:33.013Z]
Received shutdown signal, test time was about 28.985919 seconds
00:27:13.483
00:27:13.483                                                 Latency(us)
00:27:13.483 [2024-11-27T06:29:33.013Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average      min        max
00:27:13.483 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:13.483 Verification LBA range: start 0x0 length 0x4000
00:27:13.483 Nvme0n1            :      28.99  12773.86    49.90     0.00    0.00   10000.40    77.98 3019898.88
00:27:13.483 [2024-11-27T06:29:33.013Z] ===================================================================================================================
00:27:13.483 [2024-11-27T06:29:33.013Z] Total              :             12773.86    49.90     0.00    0.00   10000.40    77.98 3019898.88
00:27:13.483 06:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:13.743 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:13.744 rmmod nvme_tcp
00:27:13.744 rmmod nvme_fabrics
00:27:13.744 rmmod nvme_keyring
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 108937 ']'
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 108937
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 108937 ']'
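(In the Latency table above, the MiB/s column follows directly from IOPS times the 4096-byte IO size shown on the Job line; a quick sanity check of the Total row, as a one-off sketch:)

  # 12773.86 IOPS * 4096 B per IO / 1048576 B per MiB ~= 49.90 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 12773.86 * 4096 / 1048576 }'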
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 108937
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108937
00:27:13.744 killing process with pid 108937
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108937'
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 108937
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 108937
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:27:13.744 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
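(The iptr and nvmf_veth_fini traces above, which continue below, reduce to two idioms: round-trip the firewall ruleset through a filter that drops every SPDK_NVMF-tagged rule, then unwind the veth/bridge topology the test created. A condensed sketch of the same pattern; the `|| true` guards are an editorial assumption to tolerate already-removed links, not part of the traced helpers:)

  # Reinstall the firewall with the SPDK-tagged rules filtered out
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach the test links from the bridge, bring them down, delete the bridge
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster || true
      ip link set "$ifc" down || true
  done
  ip link delete nvmf_br type bridge || true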
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:27:14.005
00:27:14.005 real	0m34.725s
00:27:14.005 user	1m52.884s
00:27:14.005 sys	0m6.584s
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:14.005 ************************************
00:27:14.005 END TEST nvmf_host_multipath_status
00:27:14.005 ************************************
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.005 ************************************
00:27:14.005 START TEST nvmf_discovery_remove_ifc
00:27:14.005 ************************************
00:27:14.005 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:14.268 * Looking for test storage...
00:27:14.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:14.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.268 --rc genhtml_branch_coverage=1
00:27:14.268 --rc genhtml_function_coverage=1
00:27:14.268 --rc genhtml_legend=1
00:27:14.268 --rc geninfo_all_blocks=1
00:27:14.268 --rc geninfo_unexecuted_blocks=1
00:27:14.268
00:27:14.268 '
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:14.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.268 --rc genhtml_branch_coverage=1
00:27:14.268 --rc genhtml_function_coverage=1
00:27:14.268 --rc genhtml_legend=1
00:27:14.268 --rc geninfo_all_blocks=1
00:27:14.268 --rc geninfo_unexecuted_blocks=1
00:27:14.268
00:27:14.268 '
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:14.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.268 --rc genhtml_branch_coverage=1
00:27:14.268 --rc genhtml_function_coverage=1
00:27:14.268 --rc genhtml_legend=1
00:27:14.268 --rc geninfo_all_blocks=1
00:27:14.268 --rc geninfo_unexecuted_blocks=1
00:27:14.268
00:27:14.268 '
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:14.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:14.268 --rc genhtml_branch_coverage=1
00:27:14.268 --rc genhtml_function_coverage=1
00:27:14.268 --rc genhtml_legend=1
00:27:14.268 --rc geninfo_all_blocks=1
00:27:14.268 --rc geninfo_unexecuted_blocks=1
00:27:14.268
00:27:14.268 '
06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
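(The lt 1.15 2 / cmp_versions walk traced above is a segment-wise version compare: split both strings on ., - and :, then compare the fields numerically until one side wins, so 1.15 sorts below 2. A condensed sketch of that logic, not the actual scripts/common.sh implementation:)

  # Return 0 when version $1 is strictly lower than version $2
  lt_sketch() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first smaller field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt_sketch 1.15 2 && echo "1.15 < 2"   # matches the return 0 traced above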
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.268 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:14.269 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:14.269 06:29:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:14.269 Cannot find device "nvmf_init_br" 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:27:14.269 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:14.269 Cannot find device "nvmf_init_br2" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:14.270 Cannot find device "nvmf_tgt_br" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:14.270 Cannot find device "nvmf_tgt_br2" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:14.270 Cannot find device "nvmf_init_br" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:14.270 Cannot find device "nvmf_init_br2" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:14.270 Cannot find device "nvmf_tgt_br" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:14.270 Cannot find device "nvmf_tgt_br2" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:14.270 Cannot find device "nvmf_br" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:14.270 Cannot find device "nvmf_init_if" 00:27:14.270 06:29:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:14.270 Cannot find device "nvmf_init_if2" 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:14.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:14.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:14.270 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:14.532 06:29:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:14.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:14.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:27:14.532 00:27:14.532 --- 10.0.0.3 ping statistics --- 00:27:14.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.532 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:14.532 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:14.532 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.025 ms 00:27:14.532 00:27:14.532 --- 10.0.0.4 ping statistics --- 00:27:14.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.532 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:14.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:27:14.532 00:27:14.532 --- 10.0.0.1 ping statistics --- 00:27:14.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.532 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:14.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:27:14.532 00:27:14.532 --- 10.0.0.2 ping statistics --- 00:27:14.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.532 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.532 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=110351 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 110351 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 110351 ']' 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
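Note: all of the nvmf_veth_init activity traced above reduces to one pattern, applied twice (the *2 devices are the second copy). A condensed sketch of the first set, using only commands that appear in the trace (iproute2 and iptables assumed):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end plus its bridge port
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end plus its bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                     # bridge the two stay-behind veth peers
  ip link set nvmf_tgt_br master nvmf_br
  # bring every device up, accept TCP/4420 in the host INPUT chain, allow forwarding
  # across nvmf_br, and the four pings above (10.0.0.3/.4 from the host, 10.0.0.1/.2
  # from inside the namespace) prove the path works in both directions.

The point of the indirection is that the test can later kill connectivity by touching only the namespaced target interface, leaving the host side untouched.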
00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.533 06:29:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:14.533 [2024-11-27 06:29:33.994914] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:27:14.533 [2024-11-27 06:29:33.994968] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.794 [2024-11-27 06:29:34.110208] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:14.794 [2024-11-27 06:29:34.135971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.794 [2024-11-27 06:29:34.152387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.794 [2024-11-27 06:29:34.152506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.794 [2024-11-27 06:29:34.152547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.794 [2024-11-27 06:29:34.152579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.794 [2024-11-27 06:29:34.152621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
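Note: the target's command line in the trace decodes as follows: -i 0 is the shared-memory instance ID (it reappears as --file-prefix=spdk0 in the DPDK EAL parameters above), -e 0xFFFF is the tracepoint group mask (hence the spdk_trace notices), and -m 0x2 is a hexadecimal core mask. A quick way to read such a mask, sketched in shell:

  mask=0x2                               # binary 10, so only bit 1 is set
  for i in {0..7}; do
    (( (mask >> i) & 1 )) && echo "reactor on core $i"
  done                                   # prints "reactor on core 1"

which matches the "Reactor started on core 1" notice that follows.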
00:27:14.794 [2024-11-27 06:29:34.152854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.365 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.365 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:15.365 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.365 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.365 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.626 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.626 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:15.626 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.626 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.626 [2024-11-27 06:29:34.939178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.626 [2024-11-27 06:29:34.947255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:15.626 null0 00:27:15.626 [2024-11-27 06:29:34.979209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:15.626 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.626 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=110401 00:27:15.626 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 110401 /tmp/host.sock 00:27:15.626 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 110401 ']' 00:27:15.626 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:15.626 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.626 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:15.626 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:15.626 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.626 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.626 06:29:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:15.626 [2024-11-27 06:29:35.042149] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
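Note: two separate SPDK processes are now running. The target started by nvmfappstart lives inside nvmf_tgt_ns_spdk, listens for NVMe/TCP on 10.0.0.3 (discovery on port 8009, I/O on port 4420), and answers RPCs on the default /var/tmp/spdk.sock. The second nvmf_tgt just launched plays the NVMe-oF host role through its bdev_nvme module and takes RPCs on /tmp/host.sock. Because it was given --wait-for-rpc, its framework stays paused until the script releases it, which is what the next two RPCs in the trace do; a sketch, assuming the script's rpc_cmd wrapper resolves to scripts/rpc.py:

  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # apply driver options while still paused (flag as in the trace)
  rpc.py -s /tmp/host.sock framework_start_init         # release the --wait-for-rpc hold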
00:27:15.626 [2024-11-27 06:29:35.042203] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110401 ] 00:27:15.888 [2024-11-27 06:29:35.156327] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:15.888 [2024-11-27 06:29:35.185331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.888 [2024-11-27 06:29:35.204034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.461 06:29:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.848 [2024-11-27 06:29:37.001773] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:17.848 [2024-11-27 06:29:37.001801] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:17.848 [2024-11-27 06:29:37.001815] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:17.848 [2024-11-27 06:29:37.087875] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:27:17.848 [2024-11-27 06:29:37.142228] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:27:17.848 [2024-11-27 06:29:37.142915] 
bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1629950:1 started. 00:27:17.848 [2024-11-27 06:29:37.144395] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:17.848 [2024-11-27 06:29:37.144441] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:17.848 [2024-11-27 06:29:37.144459] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:17.848 [2024-11-27 06:29:37.144473] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:27:17.848 [2024-11-27 06:29:37.144492] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.848 [2024-11-27 06:29:37.150495] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1629950 was disconnected and freed. delete nvme_qpair. 
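Note: the get_bdev_list helper being traced collapses to a single pipeline (again assuming rpc_cmd is a thin wrapper over rpc.py):

  rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # emits all bdev names as one sorted, space-separated line; wait_for_bdev loops with
  # "sleep 1" until that line equals "nvme0n1", i.e. discovery has attached the
  # controller and exposed namespace 1 of nqn.2016-06.io.spdk:cnode0 as a bdev.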
00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:17.848 06:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:18.793 06:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:19.821 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:19.821 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:19.821 06:29:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:19.821 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:19.822 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.822 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.822 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.822 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.822 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:19.822 06:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:21.210 06:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:22.152 06:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.096 06:29:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:23.096 06:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:23.096 [2024-11-27 06:29:42.573254] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:23.096 [2024-11-27 06:29:42.573295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.096 [2024-11-27 06:29:42.573304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.096 [2024-11-27 06:29:42.573311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.096 [2024-11-27 06:29:42.573316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.096 [2024-11-27 06:29:42.573320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.096 [2024-11-27 06:29:42.573325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.096 [2024-11-27 06:29:42.573330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.096 [2024-11-27 06:29:42.573334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.096 [2024-11-27 06:29:42.573339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.096 [2024-11-27 06:29:42.573344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.096 [2024-11-27 06:29:42.573349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1606450 is same with the state(6) to be set 00:27:23.096 [2024-11-27 06:29:42.583251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1606450 (9): Bad file descriptor 00:27:23.096 [2024-11-27 06:29:42.593261] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
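Note: decoding the burst of aborted completions above: (00/08) is NVMe status with Status Code Type 0x0 (Generic) and Status Code 0x08 (Command Aborted due to SQ Deletion). All five affected commands sit on the admin queue (qid:0):

  cid 0..3  ASYNC EVENT REQUEST (opcode 0x0c)  # the AERs the host keeps outstanding
  cid 4     KEEP ALIVE          (opcode 0x18)  # the periodic keep-alive

Once spdk_sock_recv() fails with errno 110 (ETIMEDOUT, nothing having answered since the target interface was taken down), the transport aborts all five, deletes the qpairs, and hands the controller to the reset path traced next.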
00:27:23.096 [2024-11-27 06:29:42.593274] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:23.096 [2024-11-27 06:29:42.593276] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:23.096 [2024-11-27 06:29:42.593279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:23.096 [2024-11-27 06:29:42.593298] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:24.038 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.038 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.038 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.038 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.038 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.038 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.038 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.299 [2024-11-27 06:29:43.595973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:24.299 [2024-11-27 06:29:43.596085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1606450 with addr=10.0.0.3, port=4420 00:27:24.299 [2024-11-27 06:29:43.596115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1606450 is same with the state(6) to be set 00:27:24.299 [2024-11-27 06:29:43.596175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1606450 (9): Bad file descriptor 00:27:24.299 [2024-11-27 06:29:43.597345] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:24.299 [2024-11-27 06:29:43.597436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:24.299 [2024-11-27 06:29:43.597464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:24.299 [2024-11-27 06:29:43.597484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:24.299 [2024-11-27 06:29:43.597502] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:24.299 [2024-11-27 06:29:43.597514] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:24.299 [2024-11-27 06:29:43.597524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:24.299 [2024-11-27 06:29:43.597550] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
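Note: how long this retry loop runs is governed by the flags passed to bdev_nvme_start_discovery at attach time, visible earlier in the trace:

  --reconnect-delay-sec 1        # wait 1 s between reconnect attempts
  --fast-io-fail-timeout-sec 1   # start failing queued I/O after 1 s of disconnection
  --ctrlr-loss-timeout-sec 2     # after 2 s, give up and delete the controller and its bdevs

With 10.0.0.3 gone, connect() keeps failing with errno 110, so once the 2 s loss timeout expires nvme0n1 drops out of the bdev list, producing the empty string that the script's wait_for_bdev '' is polling for.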
00:27:24.299 [2024-11-27 06:29:43.597561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.299 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.299 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:24.299 06:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.240 [2024-11-27 06:29:44.597645] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:25.240 [2024-11-27 06:29:44.597677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:25.240 [2024-11-27 06:29:44.597694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:25.240 [2024-11-27 06:29:44.597700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:25.240 [2024-11-27 06:29:44.597706] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:25.240 [2024-11-27 06:29:44.597711] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:25.240 [2024-11-27 06:29:44.597714] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:25.240 [2024-11-27 06:29:44.597717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:25.240 [2024-11-27 06:29:44.597737] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:27:25.240 [2024-11-27 06:29:44.597771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.240 [2024-11-27 06:29:44.597779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.240 [2024-11-27 06:29:44.597787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.240 [2024-11-27 06:29:44.597791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.240 [2024-11-27 06:29:44.597797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.240 [2024-11-27 06:29:44.597801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.240 [2024-11-27 06:29:44.597806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.240 [2024-11-27 06:29:44.597810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.240 [2024-11-27 06:29:44.597816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:25.240 [2024-11-27 06:29:44.597819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:25.240 [2024-11-27 06:29:44.597824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:25.240 [2024-11-27 06:29:44.598197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f5ba0 (9): Bad file descriptor 00:27:25.240 [2024-11-27 06:29:44.599205] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:25.240 [2024-11-27 06:29:44.599219] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:25.241 06:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.186 06:29:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.186 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.186 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.186 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.186 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.186 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.186 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.186 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.447 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:26.447 06:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.391 [2024-11-27 06:29:46.602750] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:27.391 [2024-11-27 06:29:46.602770] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:27.391 [2024-11-27 06:29:46.602780] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:27.391 [2024-11-27 06:29:46.688828] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.391 [2024-11-27 06:29:46.743068] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:27:27.391 [2024-11-27 06:29:46.743450] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x160fc90:1 started. 
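Note: recovery appears to need no new RPC on the host side; no fresh bdev_nvme_start_discovery shows up in the trace, so the discovery service started earlier is still polling 10.0.0.3:8009 and re-attaches on its own once the address returns. The fault is undone by reversing the two commands traced above:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # the subsystem comes back as a fresh controller, nvme1, since nvme0 was deleted,
  # and its namespace bdev reappears as nvme1n1, the name wait_for_bdev now watches for.

That rename is the crux of the test: an interface remove/re-add must produce a clean re-attach, not a resurrection of the old controller.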
00:27:27.391 [2024-11-27 06:29:46.744353] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:27.391 [2024-11-27 06:29:46.744382] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:27.391 [2024-11-27 06:29:46.744395] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:27.391 [2024-11-27 06:29:46.744406] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:27:27.391 [2024-11-27 06:29:46.744411] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.391 [2024-11-27 06:29:46.751483] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x160fc90 was disconnected and freed. delete nvme_qpair. 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:27.391 06:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.439 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 110401 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 110401 ']' 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 110401 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110401 00:27:28.440 killing process with pid 110401 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:28.440 06:29:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110401' 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 110401 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 110401 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.440 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:28.714 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.714 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:28.714 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.714 06:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.714 rmmod nvme_tcp 00:27:28.714 rmmod nvme_fabrics 00:27:28.714 rmmod nvme_keyring 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 110351 ']' 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 110351 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 110351 ']' 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 110351 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110351 00:27:28.714 killing process with pid 110351 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110351' 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 110351 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 110351 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:28.714 06:29:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:28.714 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:27:28.976 ************************************ 00:27:28.976 END TEST nvmf_discovery_remove_ifc 00:27:28.976 ************************************ 00:27:28.976 00:27:28.976 real 0m14.862s 00:27:28.976 user 0m26.660s 00:27:28.976 sys 0m1.259s 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:28.976 06:29:48 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.976 ************************************ 00:27:28.976 START TEST nvmf_identify_kernel_target 00:27:28.976 ************************************ 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:28.976 * Looking for test storage... 00:27:28.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:28.976 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.237 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:29.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.238 --rc genhtml_branch_coverage=1 00:27:29.238 --rc genhtml_function_coverage=1 00:27:29.238 --rc genhtml_legend=1 00:27:29.238 --rc geninfo_all_blocks=1 00:27:29.238 --rc geninfo_unexecuted_blocks=1 00:27:29.238 00:27:29.238 ' 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:29.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.238 --rc genhtml_branch_coverage=1 00:27:29.238 --rc genhtml_function_coverage=1 00:27:29.238 --rc genhtml_legend=1 00:27:29.238 --rc geninfo_all_blocks=1 00:27:29.238 --rc geninfo_unexecuted_blocks=1 00:27:29.238 00:27:29.238 ' 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:29.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.238 --rc genhtml_branch_coverage=1 00:27:29.238 --rc genhtml_function_coverage=1 00:27:29.238 --rc genhtml_legend=1 00:27:29.238 --rc geninfo_all_blocks=1 00:27:29.238 --rc geninfo_unexecuted_blocks=1 00:27:29.238 00:27:29.238 ' 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:29.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.238 --rc genhtml_branch_coverage=1 00:27:29.238 --rc genhtml_function_coverage=1 00:27:29.238 --rc genhtml_legend=1 00:27:29.238 --rc geninfo_all_blocks=1 00:27:29.238 --rc geninfo_unexecuted_blocks=1 00:27:29.238 00:27:29.238 ' 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
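Before nvmf/common.sh is sourced below, the trace above walks scripts/common.sh's lt/cmp_versions check: each version string is split on '.', '-' and ':' and compared component by component, so 'lt 1.15 2' succeeds because 1 < 2 at the first component, and the lcov 1.x option branch is taken. A standalone sketch of the same comparison, assuming all components are plain integers (not SPDK's implementation verbatim):

cmp_versions() {
    # Split both versions on the separators the trace shows (IFS=.-:)
    # and compare component-wise, padding the shorter one with zeros.
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < len; i++)); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && { echo lt; return; }
        (( a > b )) && { echo gt; return; }
    done
    echo eq
}

cmp_versions 1.15 2   # prints "lt": 1 < 2 decides at the first component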
00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.238 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:29.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:29.239 06:29:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:29.239 06:29:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:29.239 Cannot find device "nvmf_init_br" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:29.239 Cannot find device "nvmf_init_br2" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:29.239 Cannot find device "nvmf_tgt_br" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:29.239 Cannot find device "nvmf_tgt_br2" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:29.239 Cannot find device "nvmf_init_br" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:29.239 Cannot find device "nvmf_init_br2" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:29.239 Cannot find device "nvmf_tgt_br" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:29.239 Cannot find device "nvmf_tgt_br2" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:29.239 Cannot find device "nvmf_br" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:29.239 Cannot find device "nvmf_init_if" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:29.239 Cannot find device "nvmf_init_if2" 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:29.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:29.239 06:29:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:29.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:29.239 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:29.240 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:29.240 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:29.240 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:29.240 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:29.499 06:29:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:29.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:29.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:27:29.499 00:27:29.499 --- 10.0.0.3 ping statistics --- 00:27:29.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.499 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:29.499 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:29.499 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:27:29.499 00:27:29.499 --- 10.0.0.4 ping statistics --- 00:27:29.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.499 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:29.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:29.499 00:27:29.499 --- 10.0.0.1 ping statistics --- 00:27:29.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.499 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:29.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:29.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:27:29.499 00:27:29.499 --- 10.0.0.2 ping statistics --- 00:27:29.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.499 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.499 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:29.500 06:29:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:29.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:29.758 Waiting for block devices as requested 00:27:29.758 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:30.016 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:30.016 No valid GPT data, bailing 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:27:30.016 06:29:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:30.016 No valid GPT data, bailing 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:27:30.016 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:30.017 No valid GPT data, bailing 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:27:30.017 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:30.276 No valid GPT data, bailing 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -a 10.0.0.1 -t tcp -s 4420 00:27:30.276 00:27:30.276 Discovery Log Number of Records 2, Generation counter 2 00:27:30.276 =====Discovery Log Entry 0====== 00:27:30.276 trtype: tcp 00:27:30.276 adrfam: ipv4 00:27:30.276 subtype: current discovery subsystem 00:27:30.276 treq: not specified, sq flow control disable supported 00:27:30.276 portid: 1 00:27:30.276 trsvcid: 4420 00:27:30.276 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:30.276 traddr: 10.0.0.1 00:27:30.276 eflags: none 00:27:30.276 sectype: none 00:27:30.276 =====Discovery Log Entry 1====== 00:27:30.276 trtype: tcp 00:27:30.276 adrfam: ipv4 00:27:30.276 subtype: nvme subsystem 00:27:30.276 treq: not 
specified, sq flow control disable supported 00:27:30.276 portid: 1 00:27:30.276 trsvcid: 4420 00:27:30.276 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:30.276 traddr: 10.0.0.1 00:27:30.276 eflags: none 00:27:30.276 sectype: none 00:27:30.276 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:30.276 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:30.276 ===================================================== 00:27:30.276 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:30.276 ===================================================== 00:27:30.276 Controller Capabilities/Features 00:27:30.276 ================================ 00:27:30.276 Vendor ID: 0000 00:27:30.276 Subsystem Vendor ID: 0000 00:27:30.276 Serial Number: 63e3660db192ace7e7b0 00:27:30.276 Model Number: Linux 00:27:30.276 Firmware Version: 6.8.9-20 00:27:30.276 Recommended Arb Burst: 0 00:27:30.276 IEEE OUI Identifier: 00 00 00 00:27:30.276 Multi-path I/O 00:27:30.276 May have multiple subsystem ports: No 00:27:30.276 May have multiple controllers: No 00:27:30.276 Associated with SR-IOV VF: No 00:27:30.276 Max Data Transfer Size: Unlimited 00:27:30.276 Max Number of Namespaces: 0 00:27:30.276 Max Number of I/O Queues: 1024 00:27:30.276 NVMe Specification Version (VS): 1.3 00:27:30.276 NVMe Specification Version (Identify): 1.3 00:27:30.276 Maximum Queue Entries: 1024 00:27:30.276 Contiguous Queues Required: No 00:27:30.276 Arbitration Mechanisms Supported 00:27:30.276 Weighted Round Robin: Not Supported 00:27:30.276 Vendor Specific: Not Supported 00:27:30.276 Reset Timeout: 7500 ms 00:27:30.276 Doorbell Stride: 4 bytes 00:27:30.276 NVM Subsystem Reset: Not Supported 00:27:30.276 Command Sets Supported 00:27:30.276 NVM Command Set: Supported 00:27:30.276 Boot Partition: Not Supported 00:27:30.276 Memory Page Size Minimum: 4096 bytes 00:27:30.276 Memory Page Size Maximum: 4096 bytes 00:27:30.276 Persistent Memory Region: Not Supported 00:27:30.276 Optional Asynchronous Events Supported 00:27:30.276 Namespace Attribute Notices: Not Supported 00:27:30.276 Firmware Activation Notices: Not Supported 00:27:30.276 ANA Change Notices: Not Supported 00:27:30.276 PLE Aggregate Log Change Notices: Not Supported 00:27:30.276 LBA Status Info Alert Notices: Not Supported 00:27:30.276 EGE Aggregate Log Change Notices: Not Supported 00:27:30.276 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.276 Zone Descriptor Change Notices: Not Supported 00:27:30.276 Discovery Log Change Notices: Supported 00:27:30.276 Controller Attributes 00:27:30.276 128-bit Host Identifier: Not Supported 00:27:30.276 Non-Operational Permissive Mode: Not Supported 00:27:30.276 NVM Sets: Not Supported 00:27:30.276 Read Recovery Levels: Not Supported 00:27:30.276 Endurance Groups: Not Supported 00:27:30.276 Predictable Latency Mode: Not Supported 00:27:30.276 Traffic Based Keep ALive: Not Supported 00:27:30.276 Namespace Granularity: Not Supported 00:27:30.276 SQ Associations: Not Supported 00:27:30.276 UUID List: Not Supported 00:27:30.276 Multi-Domain Subsystem: Not Supported 00:27:30.276 Fixed Capacity Management: Not Supported 00:27:30.276 Variable Capacity Management: Not Supported 00:27:30.276 Delete Endurance Group: Not Supported 00:27:30.276 Delete NVM Set: Not Supported 00:27:30.276 Extended LBA Formats Supported: Not Supported 00:27:30.276 Flexible Data 
Placement Supported: Not Supported 00:27:30.276 00:27:30.276 Controller Memory Buffer Support 00:27:30.277 ================================ 00:27:30.277 Supported: No 00:27:30.277 00:27:30.277 Persistent Memory Region Support 00:27:30.277 ================================ 00:27:30.277 Supported: No 00:27:30.277 00:27:30.277 Admin Command Set Attributes 00:27:30.277 ============================ 00:27:30.277 Security Send/Receive: Not Supported 00:27:30.277 Format NVM: Not Supported 00:27:30.277 Firmware Activate/Download: Not Supported 00:27:30.277 Namespace Management: Not Supported 00:27:30.277 Device Self-Test: Not Supported 00:27:30.277 Directives: Not Supported 00:27:30.277 NVMe-MI: Not Supported 00:27:30.277 Virtualization Management: Not Supported 00:27:30.277 Doorbell Buffer Config: Not Supported 00:27:30.277 Get LBA Status Capability: Not Supported 00:27:30.277 Command & Feature Lockdown Capability: Not Supported 00:27:30.277 Abort Command Limit: 1 00:27:30.277 Async Event Request Limit: 1 00:27:30.277 Number of Firmware Slots: N/A 00:27:30.277 Firmware Slot 1 Read-Only: N/A 00:27:30.277 Firmware Activation Without Reset: N/A 00:27:30.277 Multiple Update Detection Support: N/A 00:27:30.277 Firmware Update Granularity: No Information Provided 00:27:30.277 Per-Namespace SMART Log: No 00:27:30.277 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.277 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:30.277 Command Effects Log Page: Not Supported 00:27:30.277 Get Log Page Extended Data: Supported 00:27:30.277 Telemetry Log Pages: Not Supported 00:27:30.277 Persistent Event Log Pages: Not Supported 00:27:30.277 Supported Log Pages Log Page: May Support 00:27:30.277 Commands Supported & Effects Log Page: Not Supported 00:27:30.277 Feature Identifiers & Effects Log Page:May Support 00:27:30.277 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.277 Data Area 4 for Telemetry Log: Not Supported 00:27:30.277 Error Log Page Entries Supported: 1 00:27:30.277 Keep Alive: Not Supported 00:27:30.277 00:27:30.277 NVM Command Set Attributes 00:27:30.277 ========================== 00:27:30.277 Submission Queue Entry Size 00:27:30.277 Max: 1 00:27:30.277 Min: 1 00:27:30.277 Completion Queue Entry Size 00:27:30.277 Max: 1 00:27:30.277 Min: 1 00:27:30.277 Number of Namespaces: 0 00:27:30.277 Compare Command: Not Supported 00:27:30.277 Write Uncorrectable Command: Not Supported 00:27:30.277 Dataset Management Command: Not Supported 00:27:30.277 Write Zeroes Command: Not Supported 00:27:30.277 Set Features Save Field: Not Supported 00:27:30.277 Reservations: Not Supported 00:27:30.277 Timestamp: Not Supported 00:27:30.277 Copy: Not Supported 00:27:30.277 Volatile Write Cache: Not Present 00:27:30.277 Atomic Write Unit (Normal): 1 00:27:30.277 Atomic Write Unit (PFail): 1 00:27:30.277 Atomic Compare & Write Unit: 1 00:27:30.277 Fused Compare & Write: Not Supported 00:27:30.277 Scatter-Gather List 00:27:30.277 SGL Command Set: Supported 00:27:30.277 SGL Keyed: Not Supported 00:27:30.277 SGL Bit Bucket Descriptor: Not Supported 00:27:30.277 SGL Metadata Pointer: Not Supported 00:27:30.277 Oversized SGL: Not Supported 00:27:30.277 SGL Metadata Address: Not Supported 00:27:30.277 SGL Offset: Supported 00:27:30.277 Transport SGL Data Block: Not Supported 00:27:30.277 Replay Protected Memory Block: Not Supported 00:27:30.277 00:27:30.277 Firmware Slot Information 00:27:30.277 ========================= 00:27:30.277 Active slot: 0 00:27:30.277 00:27:30.277 00:27:30.277 Error Log 
00:27:30.277 ========= 00:27:30.277 00:27:30.277 Active Namespaces 00:27:30.277 ================= 00:27:30.277 Discovery Log Page 00:27:30.277 ================== 00:27:30.277 Generation Counter: 2 00:27:30.277 Number of Records: 2 00:27:30.277 Record Format: 0 00:27:30.277 00:27:30.277 Discovery Log Entry 0 00:27:30.277 ---------------------- 00:27:30.277 Transport Type: 3 (TCP) 00:27:30.277 Address Family: 1 (IPv4) 00:27:30.277 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:30.277 Entry Flags: 00:27:30.277 Duplicate Returned Information: 0 00:27:30.277 Explicit Persistent Connection Support for Discovery: 0 00:27:30.277 Transport Requirements: 00:27:30.277 Secure Channel: Not Specified 00:27:30.277 Port ID: 1 (0x0001) 00:27:30.277 Controller ID: 65535 (0xffff) 00:27:30.277 Admin Max SQ Size: 32 00:27:30.277 Transport Service Identifier: 4420 00:27:30.277 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:30.277 Transport Address: 10.0.0.1 00:27:30.277 Discovery Log Entry 1 00:27:30.277 ---------------------- 00:27:30.277 Transport Type: 3 (TCP) 00:27:30.277 Address Family: 1 (IPv4) 00:27:30.277 Subsystem Type: 2 (NVM Subsystem) 00:27:30.277 Entry Flags: 00:27:30.277 Duplicate Returned Information: 0 00:27:30.277 Explicit Persistent Connection Support for Discovery: 0 00:27:30.277 Transport Requirements: 00:27:30.277 Secure Channel: Not Specified 00:27:30.277 Port ID: 1 (0x0001) 00:27:30.277 Controller ID: 65535 (0xffff) 00:27:30.277 Admin Max SQ Size: 32 00:27:30.277 Transport Service Identifier: 4420 00:27:30.277 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:30.277 Transport Address: 10.0.0.1 00:27:30.277 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:30.536 get_feature(0x01) failed 00:27:30.536 get_feature(0x02) failed 00:27:30.536 get_feature(0x04) failed 00:27:30.536 ===================================================== 00:27:30.536 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:30.536 ===================================================== 00:27:30.536 Controller Capabilities/Features 00:27:30.536 ================================ 00:27:30.536 Vendor ID: 0000 00:27:30.536 Subsystem Vendor ID: 0000 00:27:30.536 Serial Number: a230674d1fe609bdb7a0 00:27:30.536 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:30.536 Firmware Version: 6.8.9-20 00:27:30.536 Recommended Arb Burst: 6 00:27:30.536 IEEE OUI Identifier: 00 00 00 00:27:30.536 Multi-path I/O 00:27:30.536 May have multiple subsystem ports: Yes 00:27:30.536 May have multiple controllers: Yes 00:27:30.536 Associated with SR-IOV VF: No 00:27:30.536 Max Data Transfer Size: Unlimited 00:27:30.536 Max Number of Namespaces: 1024 00:27:30.536 Max Number of I/O Queues: 128 00:27:30.536 NVMe Specification Version (VS): 1.3 00:27:30.536 NVMe Specification Version (Identify): 1.3 00:27:30.536 Maximum Queue Entries: 1024 00:27:30.536 Contiguous Queues Required: No 00:27:30.536 Arbitration Mechanisms Supported 00:27:30.536 Weighted Round Robin: Not Supported 00:27:30.536 Vendor Specific: Not Supported 00:27:30.536 Reset Timeout: 7500 ms 00:27:30.536 Doorbell Stride: 4 bytes 00:27:30.537 NVM Subsystem Reset: Not Supported 00:27:30.537 Command Sets Supported 00:27:30.537 NVM Command Set: Supported 00:27:30.537 Boot Partition: Not Supported 00:27:30.537 Memory 
Page Size Minimum: 4096 bytes 00:27:30.537 Memory Page Size Maximum: 4096 bytes 00:27:30.537 Persistent Memory Region: Not Supported 00:27:30.537 Optional Asynchronous Events Supported 00:27:30.537 Namespace Attribute Notices: Supported 00:27:30.537 Firmware Activation Notices: Not Supported 00:27:30.537 ANA Change Notices: Supported 00:27:30.537 PLE Aggregate Log Change Notices: Not Supported 00:27:30.537 LBA Status Info Alert Notices: Not Supported 00:27:30.537 EGE Aggregate Log Change Notices: Not Supported 00:27:30.537 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.537 Zone Descriptor Change Notices: Not Supported 00:27:30.537 Discovery Log Change Notices: Not Supported 00:27:30.537 Controller Attributes 00:27:30.537 128-bit Host Identifier: Supported 00:27:30.537 Non-Operational Permissive Mode: Not Supported 00:27:30.537 NVM Sets: Not Supported 00:27:30.537 Read Recovery Levels: Not Supported 00:27:30.537 Endurance Groups: Not Supported 00:27:30.537 Predictable Latency Mode: Not Supported 00:27:30.537 Traffic Based Keep ALive: Supported 00:27:30.537 Namespace Granularity: Not Supported 00:27:30.537 SQ Associations: Not Supported 00:27:30.537 UUID List: Not Supported 00:27:30.537 Multi-Domain Subsystem: Not Supported 00:27:30.537 Fixed Capacity Management: Not Supported 00:27:30.537 Variable Capacity Management: Not Supported 00:27:30.537 Delete Endurance Group: Not Supported 00:27:30.537 Delete NVM Set: Not Supported 00:27:30.537 Extended LBA Formats Supported: Not Supported 00:27:30.537 Flexible Data Placement Supported: Not Supported 00:27:30.537 00:27:30.537 Controller Memory Buffer Support 00:27:30.537 ================================ 00:27:30.537 Supported: No 00:27:30.537 00:27:30.537 Persistent Memory Region Support 00:27:30.537 ================================ 00:27:30.537 Supported: No 00:27:30.537 00:27:30.537 Admin Command Set Attributes 00:27:30.537 ============================ 00:27:30.537 Security Send/Receive: Not Supported 00:27:30.537 Format NVM: Not Supported 00:27:30.537 Firmware Activate/Download: Not Supported 00:27:30.537 Namespace Management: Not Supported 00:27:30.537 Device Self-Test: Not Supported 00:27:30.537 Directives: Not Supported 00:27:30.537 NVMe-MI: Not Supported 00:27:30.537 Virtualization Management: Not Supported 00:27:30.537 Doorbell Buffer Config: Not Supported 00:27:30.537 Get LBA Status Capability: Not Supported 00:27:30.537 Command & Feature Lockdown Capability: Not Supported 00:27:30.537 Abort Command Limit: 4 00:27:30.537 Async Event Request Limit: 4 00:27:30.537 Number of Firmware Slots: N/A 00:27:30.537 Firmware Slot 1 Read-Only: N/A 00:27:30.537 Firmware Activation Without Reset: N/A 00:27:30.537 Multiple Update Detection Support: N/A 00:27:30.537 Firmware Update Granularity: No Information Provided 00:27:30.537 Per-Namespace SMART Log: Yes 00:27:30.537 Asymmetric Namespace Access Log Page: Supported 00:27:30.537 ANA Transition Time : 10 sec 00:27:30.537 00:27:30.537 Asymmetric Namespace Access Capabilities 00:27:30.537 ANA Optimized State : Supported 00:27:30.537 ANA Non-Optimized State : Supported 00:27:30.537 ANA Inaccessible State : Supported 00:27:30.537 ANA Persistent Loss State : Supported 00:27:30.537 ANA Change State : Supported 00:27:30.537 ANAGRPID is not changed : No 00:27:30.537 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:30.537 00:27:30.537 ANA Group Identifier Maximum : 128 00:27:30.537 Number of ANA Group Identifiers : 128 00:27:30.537 Max Number of Allowed Namespaces : 1024 00:27:30.537 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:27:30.537 Command Effects Log Page: Supported 00:27:30.537 Get Log Page Extended Data: Supported 00:27:30.537 Telemetry Log Pages: Not Supported 00:27:30.537 Persistent Event Log Pages: Not Supported 00:27:30.537 Supported Log Pages Log Page: May Support 00:27:30.537 Commands Supported & Effects Log Page: Not Supported 00:27:30.537 Feature Identifiers & Effects Log Page:May Support 00:27:30.537 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.537 Data Area 4 for Telemetry Log: Not Supported 00:27:30.537 Error Log Page Entries Supported: 128 00:27:30.537 Keep Alive: Supported 00:27:30.537 Keep Alive Granularity: 1000 ms 00:27:30.537 00:27:30.537 NVM Command Set Attributes 00:27:30.537 ========================== 00:27:30.537 Submission Queue Entry Size 00:27:30.537 Max: 64 00:27:30.537 Min: 64 00:27:30.537 Completion Queue Entry Size 00:27:30.537 Max: 16 00:27:30.537 Min: 16 00:27:30.537 Number of Namespaces: 1024 00:27:30.537 Compare Command: Not Supported 00:27:30.537 Write Uncorrectable Command: Not Supported 00:27:30.537 Dataset Management Command: Supported 00:27:30.537 Write Zeroes Command: Supported 00:27:30.537 Set Features Save Field: Not Supported 00:27:30.537 Reservations: Not Supported 00:27:30.537 Timestamp: Not Supported 00:27:30.537 Copy: Not Supported 00:27:30.537 Volatile Write Cache: Present 00:27:30.537 Atomic Write Unit (Normal): 1 00:27:30.537 Atomic Write Unit (PFail): 1 00:27:30.537 Atomic Compare & Write Unit: 1 00:27:30.537 Fused Compare & Write: Not Supported 00:27:30.537 Scatter-Gather List 00:27:30.537 SGL Command Set: Supported 00:27:30.537 SGL Keyed: Not Supported 00:27:30.537 SGL Bit Bucket Descriptor: Not Supported 00:27:30.537 SGL Metadata Pointer: Not Supported 00:27:30.537 Oversized SGL: Not Supported 00:27:30.537 SGL Metadata Address: Not Supported 00:27:30.537 SGL Offset: Supported 00:27:30.537 Transport SGL Data Block: Not Supported 00:27:30.537 Replay Protected Memory Block: Not Supported 00:27:30.537 00:27:30.537 Firmware Slot Information 00:27:30.537 ========================= 00:27:30.537 Active slot: 0 00:27:30.537 00:27:30.537 Asymmetric Namespace Access 00:27:30.537 =========================== 00:27:30.537 Change Count : 0 00:27:30.537 Number of ANA Group Descriptors : 1 00:27:30.537 ANA Group Descriptor : 0 00:27:30.537 ANA Group ID : 1 00:27:30.537 Number of NSID Values : 1 00:27:30.537 Change Count : 0 00:27:30.537 ANA State : 1 00:27:30.537 Namespace Identifier : 1 00:27:30.537 00:27:30.537 Commands Supported and Effects 00:27:30.537 ============================== 00:27:30.537 Admin Commands 00:27:30.537 -------------- 00:27:30.537 Get Log Page (02h): Supported 00:27:30.538 Identify (06h): Supported 00:27:30.538 Abort (08h): Supported 00:27:30.538 Set Features (09h): Supported 00:27:30.538 Get Features (0Ah): Supported 00:27:30.538 Asynchronous Event Request (0Ch): Supported 00:27:30.538 Keep Alive (18h): Supported 00:27:30.538 I/O Commands 00:27:30.538 ------------ 00:27:30.538 Flush (00h): Supported 00:27:30.538 Write (01h): Supported LBA-Change 00:27:30.538 Read (02h): Supported 00:27:30.538 Write Zeroes (08h): Supported LBA-Change 00:27:30.538 Dataset Management (09h): Supported 00:27:30.538 00:27:30.538 Error Log 00:27:30.538 ========= 00:27:30.538 Entry: 0 00:27:30.538 Error Count: 0x3 00:27:30.538 Submission Queue Id: 0x0 00:27:30.538 Command Id: 0x5 00:27:30.538 Phase Bit: 0 00:27:30.538 Status Code: 0x2 00:27:30.538 Status Code Type: 0x0 00:27:30.538 Do Not Retry: 1 00:27:30.538 Error 
Location: 0x28 00:27:30.538 LBA: 0x0 00:27:30.538 Namespace: 0x0 00:27:30.538 Vendor Log Page: 0x0 00:27:30.538 ----------- 00:27:30.538 Entry: 1 00:27:30.538 Error Count: 0x2 00:27:30.538 Submission Queue Id: 0x0 00:27:30.538 Command Id: 0x5 00:27:30.538 Phase Bit: 0 00:27:30.538 Status Code: 0x2 00:27:30.538 Status Code Type: 0x0 00:27:30.538 Do Not Retry: 1 00:27:30.538 Error Location: 0x28 00:27:30.538 LBA: 0x0 00:27:30.538 Namespace: 0x0 00:27:30.538 Vendor Log Page: 0x0 00:27:30.538 ----------- 00:27:30.538 Entry: 2 00:27:30.538 Error Count: 0x1 00:27:30.538 Submission Queue Id: 0x0 00:27:30.538 Command Id: 0x4 00:27:30.538 Phase Bit: 0 00:27:30.538 Status Code: 0x2 00:27:30.538 Status Code Type: 0x0 00:27:30.538 Do Not Retry: 1 00:27:30.538 Error Location: 0x28 00:27:30.538 LBA: 0x0 00:27:30.538 Namespace: 0x0 00:27:30.538 Vendor Log Page: 0x0 00:27:30.538 00:27:30.538 Number of Queues 00:27:30.538 ================ 00:27:30.538 Number of I/O Submission Queues: 128 00:27:30.538 Number of I/O Completion Queues: 128 00:27:30.538 00:27:30.538 ZNS Specific Controller Data 00:27:30.538 ============================ 00:27:30.538 Zone Append Size Limit: 0 00:27:30.538 00:27:30.538 00:27:30.538 Active Namespaces 00:27:30.538 ================= 00:27:30.538 get_feature(0x05) failed 00:27:30.538 Namespace ID:1 00:27:30.538 Command Set Identifier: NVM (00h) 00:27:30.538 Deallocate: Supported 00:27:30.538 Deallocated/Unwritten Error: Not Supported 00:27:30.538 Deallocated Read Value: Unknown 00:27:30.538 Deallocate in Write Zeroes: Not Supported 00:27:30.538 Deallocated Guard Field: 0xFFFF 00:27:30.538 Flush: Supported 00:27:30.538 Reservation: Not Supported 00:27:30.538 Namespace Sharing Capabilities: Multiple Controllers 00:27:30.538 Size (in LBAs): 1310720 (5GiB) 00:27:30.538 Capacity (in LBAs): 1310720 (5GiB) 00:27:30.538 Utilization (in LBAs): 1310720 (5GiB) 00:27:30.538 UUID: d9107a62-84b8-4b4e-907b-20e9bc96c084 00:27:30.538 Thin Provisioning: Not Supported 00:27:30.538 Per-NS Atomic Units: Yes 00:27:30.538 Atomic Boundary Size (Normal): 0 00:27:30.538 Atomic Boundary Size (PFail): 0 00:27:30.538 Atomic Boundary Offset: 0 00:27:30.538 NGUID/EUI64 Never Reused: No 00:27:30.538 ANA group ID: 1 00:27:30.538 Namespace Write Protected: No 00:27:30.538 Number of LBA Formats: 1 00:27:30.538 Current LBA Format: LBA Format #00 00:27:30.538 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:27:30.538 00:27:30.538 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:30.538 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:30.538 06:29:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:30.538 rmmod nvme_tcp 00:27:30.538 rmmod nvme_fabrics 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:30.538 06:29:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:30.538 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:30.797 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:31.056 06:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:31.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:31.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:31.573 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:31.573 00:27:31.573 real 0m2.602s 00:27:31.573 user 0m0.899s 00:27:31.573 sys 0m1.081s 00:27:31.573 06:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.574 ************************************ 00:27:31.574 END TEST nvmf_identify_kernel_target 00:27:31.574 06:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.574 ************************************ 00:27:31.574 06:29:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:31.574 06:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:31.574 06:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.574 06:29:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.574 ************************************ 00:27:31.574 START TEST nvmf_auth_host 00:27:31.574 ************************************ 00:27:31.574 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:31.833 * Looking for test storage... 
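[Editor's note] The clean_kernel_target step just above tears down the kernel nvmet target by unwinding its configfs tree in reverse order of creation: disable the namespace, unlink the subsystem from the port, then remove the namespace, port, and subsystem directories before unloading the modules. A minimal standalone sketch of that sequence, assuming the same testnqn and port number as this run (the target of the bare `echo 0` is presumably the namespace's enable attribute, which the xtrace does not show):

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    if [[ -e $subsys ]]; then
        echo 0 > "$subsys/namespaces/1/enable"                    # assumed target of the bare 'echo 0'
        rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"  # unlink subsystem from port
        rmdir "$subsys/namespaces/1"                              # configfs dirs must go leaf-first
        rmdir /sys/kernel/config/nvmet/ports/1
        rmdir "$subsys"
    fi
    modprobe -r nvmet_tcp nvmet                                   # unload transport before core module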
00:27:31.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.833 --rc genhtml_branch_coverage=1 00:27:31.833 --rc genhtml_function_coverage=1 00:27:31.833 --rc genhtml_legend=1 00:27:31.833 --rc geninfo_all_blocks=1 00:27:31.833 --rc geninfo_unexecuted_blocks=1 00:27:31.833 00:27:31.833 ' 00:27:31.833 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.834 --rc genhtml_branch_coverage=1 00:27:31.834 --rc genhtml_function_coverage=1 00:27:31.834 --rc genhtml_legend=1 00:27:31.834 --rc geninfo_all_blocks=1 00:27:31.834 --rc geninfo_unexecuted_blocks=1 00:27:31.834 00:27:31.834 ' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.834 --rc genhtml_branch_coverage=1 00:27:31.834 --rc genhtml_function_coverage=1 00:27:31.834 --rc genhtml_legend=1 00:27:31.834 --rc geninfo_all_blocks=1 00:27:31.834 --rc geninfo_unexecuted_blocks=1 00:27:31.834 00:27:31.834 ' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.834 --rc genhtml_branch_coverage=1 00:27:31.834 --rc genhtml_function_coverage=1 00:27:31.834 --rc genhtml_legend=1 00:27:31.834 --rc geninfo_all_blocks=1 00:27:31.834 --rc geninfo_unexecuted_blocks=1 00:27:31.834 00:27:31.834 ' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.834 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:31.834 Cannot find device "nvmf_init_br" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:31.834 Cannot find device "nvmf_init_br2" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:31.834 Cannot find device "nvmf_tgt_br" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:31.834 Cannot find device "nvmf_tgt_br2" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:31.834 Cannot find device "nvmf_init_br" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:31.834 Cannot find device "nvmf_init_br2" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:31.834 Cannot find device "nvmf_tgt_br" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:31.834 Cannot find device "nvmf_tgt_br2" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:31.834 Cannot find device "nvmf_br" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:31.834 Cannot find device "nvmf_init_if" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:31.834 Cannot find device "nvmf_init_if2" 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:31.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:31.834 06:29:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:31.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:27:31.834 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:31.835 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:31.835 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:31.835 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
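[Editor's note] Up to this point nvmf_veth_init has created the namespace, four veth pairs, and the bridge; the last bridge port and the SPDK_NVMF firewall rules follow immediately below. Condensed into one place, the wiring looks roughly like this (a sketch using the interface names and 10.0.0.0/24 addressing from this run; the per-link `up` commands for the namespaced ends are abbreviated):

    ip netns add nvmf_tgt_ns_spdk
    for pair in init_if:init_br init_if2:init_br2 tgt_if:tgt_br tgt_if2:tgt_br2; do
        ip link add "nvmf_${pair%%:*}" type veth peer name "nvmf_${pair##*:}"
    done
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target-side ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator addresses, root namespace
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$end" up
        ip link set "$end" master nvmf_br              # the peer ends all hang off one bridge
    done
    ip link set nvmf_br up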
00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:32.094 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:32.094 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:27:32.094 00:27:32.094 --- 10.0.0.3 ping statistics --- 00:27:32.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.094 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:32.094 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:32.094 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:27:32.094 00:27:32.094 --- 10.0.0.4 ping statistics --- 00:27:32.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.094 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:32.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:27:32.094 00:27:32.094 --- 10.0.0.1 ping statistics --- 00:27:32.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.094 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:32.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:32.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:27:32.094 00:27:32.094 --- 10.0.0.2 ping statistics --- 00:27:32.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.094 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:32.094 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=111406 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 111406 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 111406 ']' 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
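[Editor's note] The ipts calls above and the iptr teardown seen earlier in this log (iptables-save | grep -v SPDK_NVMF | iptables-restore) are two halves of one trick: every rule the harness inserts carries its own argument string in an SPDK_NVMF comment, so cleanup never has to track rule numbers. A sketch of the pair, inferred from the expanded commands visible in the xtrace:

    ipts() {
        # Append a comment recording the exact rule arguments, e.g.
        # 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {
        # Rewrite the ruleset minus every line we tagged.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT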
00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.095 06:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=49b70ce1ece953209be56846b371f28b 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.BzC 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 49b70ce1ece953209be56846b371f28b 0 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 49b70ce1ece953209be56846b371f28b 0 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=49b70ce1ece953209be56846b371f28b 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.BzC 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.BzC 00:27:33.029 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.BzC 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5caf70e449b493afa0c9746f426118cfe3bc50d3c7868ede1e0f8346f525579 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4LX 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5caf70e449b493afa0c9746f426118cfe3bc50d3c7868ede1e0f8346f525579 3 00:27:33.030 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5caf70e449b493afa0c9746f426118cfe3bc50d3c7868ede1e0f8346f525579 3 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a5caf70e449b493afa0c9746f426118cfe3bc50d3c7868ede1e0f8346f525579 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4LX 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4LX 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4LX 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e7c3aaf480ae3e5b53c0bcf0fca586fb5ec41df2f3a6536 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.10V 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 
1e7c3aaf480ae3e5b53c0bcf0fca586fb5ec41df2f3a6536 0 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e7c3aaf480ae3e5b53c0bcf0fca586fb5ec41df2f3a6536 0 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e7c3aaf480ae3e5b53c0bcf0fca586fb5ec41df2f3a6536 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.10V 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.10V 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.10V 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8e4fd2f292e85d74a26862460149c1335758c1a098d3cf5a 00:27:33.288 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZW8 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8e4fd2f292e85d74a26862460149c1335758c1a098d3cf5a 2 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8e4fd2f292e85d74a26862460149c1335758c1a098d3cf5a 2 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8e4fd2f292e85d74a26862460149c1335758c1a098d3cf5a 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZW8 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZW8 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ZW8 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83441cddaf45d3872d836762489bbf9e 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UsE 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83441cddaf45d3872d836762489bbf9e 1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83441cddaf45d3872d836762489bbf9e 1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83441cddaf45d3872d836762489bbf9e 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UsE 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UsE 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UsE 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cabd29c30fc1be58e3244221800c0a27 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CvM 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cabd29c30fc1be58e3244221800c0a27 1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cabd29c30fc1be58e3244221800c0a27 1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.289 
06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cabd29c30fc1be58e3244221800c0a27 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CvM 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CvM 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.CvM 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2f1032ee5b2b8ae71a9e98b77e7803a35e4b7cdd235bd2e 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Cgt 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2f1032ee5b2b8ae71a9e98b77e7803a35e4b7cdd235bd2e 2 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2f1032ee5b2b8ae71a9e98b77e7803a35e4b7cdd235bd2e 2 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2f1032ee5b2b8ae71a9e98b77e7803a35e4b7cdd235bd2e 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:33.289 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Cgt 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Cgt 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Cgt 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@754 -- # digest=null 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8a1b203dcd812e570e4de9b805f8a2cd 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Aur 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8a1b203dcd812e570e4de9b805f8a2cd 0 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8a1b203dcd812e570e4de9b805f8a2cd 0 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8a1b203dcd812e570e4de9b805f8a2cd 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Aur 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Aur 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Aur 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=78e861f07168060516343eb4d73edbeb993bf821760b4ffbc0679155adc6f18f 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Qil 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 78e861f07168060516343eb4d73edbeb993bf821760b4ffbc0679155adc6f18f 3 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 78e861f07168060516343eb4d73edbeb993bf821760b4ffbc0679155adc6f18f 3 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=78e861f07168060516343eb4d73edbeb993bf821760b4ffbc0679155adc6f18f 00:27:33.548 
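[editor's note] Every secret in this run is produced by the same gen_dhchap_key pattern traced above: read len/2 random bytes, hex-encode them with xxd, wrap the hex in DHHC-1 secret framing via an inline Python helper, and store the result in a mode-0600 temp file. A minimal standalone sketch of that pattern follows; the CRC32-plus-base64 framing is the standard DHHC-1 secret layout and is reconstructed here as an assumption, since the trace shows only `python -` and never the helper's body:

    digest=2                                  # 0=null 1=sha256 2=sha384 3=sha512
    key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex chars
    file=$(mktemp -t spdk.key-sha384.XXX)
    # append a little-endian CRC32 of the key bytes, base64-encode, add prefix
    python3 -c 'import base64,sys,zlib;k=bytes.fromhex(sys.argv[1]);print("DHHC-1:%02d:%s:"%(int(sys.argv[2]),base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()))' "$key" "$digest" > "$file"
    chmod 0600 "$file"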
06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Qil 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Qil 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Qil 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 111406 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 111406 ']' 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.548 06:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BzC 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4LX ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4LX 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.10V 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.ZW8 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZW8 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UsE 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.CvM ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CvM 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Cgt 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Aur ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Aur 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Qil 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
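[editor's note] With all five key files (and their optional controller keys) on disk, the loop above registers each one with the running SPDK target through the keyring_file_add_key RPC, so they can later be referenced by name (key1, ckey1, ...). Collapsed, the loop is roughly (rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py):

    for i in "${!keys[@]}"; do
        scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
        # controller (bidirectional) keys are optional; ckeys[4] is empty in this run
        [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done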
host/auth.sh@35 -- # get_main_ns_ip 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:33.807 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:34.065 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:34.065 06:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:34.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:34.324 Waiting for block devices as requested 00:27:34.324 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:34.324 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:34.889 No valid GPT data, bailing 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:34.889 No valid GPT data, bailing 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:34.889 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
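[editor's note] configure_kernel_target needs a raw NVMe namespace to export, so it walks /sys/block/nvme*, skipping zoned devices and anything spdk-gpt.py/blkid recognize as partitioned; each "No valid GPT data, bailing" above therefore marks a usable candidate. The selection logic amounts to this sketch (not the exact helper from nvmf/common.sh):

    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        zoned=$(cat "$block/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue                         # skip zoned devices
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue  # skip partitioned ones
        nvme=$dev   # reassigned each pass; the last clean device (nvme1n1 here) wins
    done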
scripts/common.sh@395 -- # return 1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:34.890 No valid GPT data, bailing 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:34.890 No valid GPT data, bailing 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:34.890 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -a 10.0.0.1 -t tcp -s 4420 00:27:35.148 00:27:35.148 Discovery Log Number of Records 2, Generation counter 2 00:27:35.148 =====Discovery Log Entry 0====== 00:27:35.148 trtype: tcp 00:27:35.148 adrfam: ipv4 00:27:35.148 subtype: current discovery subsystem 00:27:35.148 treq: not specified, sq flow control disable supported 00:27:35.148 portid: 1 00:27:35.148 trsvcid: 4420 00:27:35.148 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:35.148 traddr: 10.0.0.1 00:27:35.148 eflags: none 00:27:35.148 sectype: none 00:27:35.148 =====Discovery Log Entry 1====== 00:27:35.148 trtype: tcp 00:27:35.148 adrfam: ipv4 00:27:35.148 subtype: nvme subsystem 00:27:35.148 treq: not specified, sq flow control disable supported 00:27:35.148 portid: 1 00:27:35.148 trsvcid: 4420 00:27:35.148 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:35.148 traddr: 10.0.0.1 00:27:35.148 eflags: none 00:27:35.148 sectype: none 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
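[editor's note] The two discovery-log entries above are the proof that the configfs writes earlier in this block worked: a kernel subsystem backed by the free /dev/nvme1n1, a TCP listener on 10.0.0.1:4420, and, for the auth test, an explicit host grant instead of allow_any_host. Flattened into plain shell, with the standard nvmet configfs attribute names matched to the echo order on a best-effort basis:

    nvmet=/sys/kernel/config/nvmet
    ss=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$ss" "$ss/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$ss/attr_model"
    echo /dev/nvme1n1 > "$ss/namespaces/1/device_path"
    echo 1            > "$ss/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$ss" "$nvmet/ports/1/subsystems/"
    # auth setup: disallow anonymous hosts, grant only the test host NQN
    mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$ss/attr_allow_any_host"
    ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$ss/allowed_hosts/"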
ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:35.148 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.149 nvme0n1 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
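[editor's note] On the initiator side, the test first pins the negotiable digests and DH groups with bdev_nvme_set_options, then attaches to the kernel target with the key pair matching what was just programmed into configfs; the handshake succeeding is what makes the nvme0n1 namespace appear. The attach above reduces to:

    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1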
host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.149 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.408 nvme0n1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.408 
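[editor's note] Before each of these connect attempts, nvmet_auth_set_key re-programs the kernel host entry with the digest, DH group, and DHHC-1 secrets for the key index under test; the echo 'hmac(sha256)' and echo ffdhe2048 lines in the trace are exactly those writes. Assuming they land on the standard nvmet host-auth attributes (the trace does not print the target paths), the target half of one iteration is:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo "$key"         > "$host/dhchap_key"        # e.g. DHHC-1:00:... for keyid 0
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"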
06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.408 06:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.408 nvme0n1 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.408 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:35.667 06:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.667 06:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 nvme0n1 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.667 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.668 06:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.668 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.927 nvme0n1 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.927 
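[editor's note] Each cycle ends with the same verification seen repeatedly above: bdev_nvme_get_controllers must report exactly one controller named nvme0, proving the DH-HMAC-CHAP handshake actually completed, after which the controller is detached so the next combination starts from a clean state:

    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                        # authentication succeeded
    scripts/rpc.py bdev_nvme_detach_controller nvme0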
06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.927 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
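
The stretch above is one full host-side verification cycle. A minimal sketch of that cycle, assuming rpc_cmd wraps SPDK's scripts/rpc.py against the running target and that the key3/ckey3 keyring names were registered earlier in the run (not shown in this excerpt):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # the attach only succeeds if DH-HMAC-CHAP completes; verify, then tear down
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Every command and flag here appears verbatim in the trace; only the ordering commentary is added.
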
00:27:35.928 nvme0n1 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.928 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.187 06:29:55 
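
From here the log enters the ffdhe3072 pass: auth.sh@101 iterates DH groups and @102 iterates key indices, running the target-side and host-side halves for each pair. A hedged reconstruction of that driver loop (array contents are illustrative; this excerpt covers sha256 with ffdhe2048 through ffdhe6144):

  for dhgroup in "${dhgroups[@]}"; do                  # auth.sh@101
      for keyid in "${!keys[@]}"; do                   # auth.sh@102: indices 0..4
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # @103: program the target
          connect_authenticate sha256 "$dhgroup" "$keyid"  # @104: attach from the host
      done
  done
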
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.187 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.446 nvme0n1 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.446 06:29:55 
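
One detail worth decoding: the comparison rendered as [[ nvme0 == \n\v\m\e\0 ]] is not corruption. Inside [[ ]] the right-hand side of == is a glob pattern, so auth.sh quotes it to force a literal match, and bash's xtrace prints quoted patterns with each character backslash-escaped. Unescaped, the check is simply:

  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
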
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.446 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.447 06:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.447 nvme0n1 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.447 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.707 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.707 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.707 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:36.707 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.707 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.707 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.708 06:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.708 nvme0n1 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.708 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.970 nvme0n1 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.970 nvme0n1 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.970 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.332 06:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.593 06:29:57 
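
The nvmet_auth_set_key half (auth.sh@42-@51) programs the peer before each attach. Judging by the bare echo lines at @48-@51, it plausibly writes the Linux kernel nvmet target's per-host DH-HMAC-CHAP attributes; a sketch under that assumption, with the configfs path and variable setup illustrative rather than taken from the script:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
  key='DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT:'
  ckey='DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=:'
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # @48: digest
  echo ffdhe4096      > "$host/dhchap_dhgroup"    # @49: DH group
  echo "$key"         > "$host/dhchap_key"        # @50: host secret
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # @51: bidirectional only
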
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.593 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.594 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.594 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.594 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.594 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.594 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.594 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.594 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.855 nvme0n1 00:27:37.855 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.855 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.855 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.856 06:29:57 
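
get_main_ns_ip (nvmf/common.sh@769-@783) runs before every attach, and the trace exposes its mechanism: pick the name of an environment variable based on the active transport, then expand it indirectly. A simplified reconstruction; the guards are approximations of what the traced tests at @775-@778 suggest:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @773
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}         # @776: variable *name*
      [[ -z ${!ip} ]] && return 1                  # @778: indirect expansion
      echo "${!ip}"                                # @783: 10.0.0.1 in this run
  }
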
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.856 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.117 nvme0n1 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.117 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.118 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.381 nvme0n1 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.381 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.382 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.382 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.382 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.642 nvme0n1 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.642 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:38.643 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.643 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.643 06:29:57 
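
The secrets themselves follow the NVMe in-band authentication representation: DHHC-1:<t>:<base64>:, where <t> records which hash the configured string was transformed with (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a CRC-32. That is why keyid=4 above carries a longer 03-type key, and its empty ckey makes that pairing unidirectional. If nvme-cli is available, comparable secrets can be minted like so (flag names per nvme-cli's gen-dhchap-key; defaults vary by version, so treat this as illustrative):

  nvme gen-dhchap-key --key-length=64 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0
  # -> DHHC-1:03:<base64 of 64-byte secret + CRC-32>:
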
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.643 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.643 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.643 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:38.643 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.643 06:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.643 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.904 nvme0n1 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.904 06:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.285 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:40.285 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:40.285 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:40.285 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
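set -x does not print redirections, so the bare echo commands in the @48, @49 and @50 frames look as if they write to stdout; on the target side nvmet_auth_set_key is actually writing these values into the kernel nvmet configfs entry for the host. The paths below are an assumption based on the usual Linux nvmet host-auth layout and are not visible in this trace:

    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    echo 'hmac(sha256)' > "$hostdir/dhchap_hash"      # the @48 frame
    echo ffdhe6144      > "$hostdir/dhchap_dhgroup"   # the @49 frame
    echo "$key"         > "$hostdir/dhchap_key"       # the @50 frame (DHHC-1 secret)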
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.286 06:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.852 nvme0n1 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.852 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.111 nvme0n1 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.111 06:30:00 
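Stripped of the xtrace noise, each connect_authenticate round reduces to two RPCs against the SPDK host: restrict the allowed DH-HMAC-CHAP parameters, then attach with the keys for the current key ID. Condensed from the @60 and @61 frames of this sha256/ffdhe6144/keyid=1 round (key1 and ckey1 are keyring names registered earlier in auth.sh, outside this excerpt):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1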
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.111 06:30:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.111 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.369 nvme0n1 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.369 06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.369 
06:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.627 nvme0n1 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.627 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
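Every round ends with the same verify-and-teardown step: list the controllers, check that exactly nvme0 came up, and detach it before the next key is tried. The \n\v\m\e\0 pattern in the @64 frames is only how xtrace renders a quoted, non-glob comparison. Condensed:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # xtrace prints the quoted side as \n\v\m\e\0
    rpc_cmd bdev_nvme_detach_controller nvme0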
common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.886 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.887 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.887 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.887 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.887 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.147 nvme0n1 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.147 06:30:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.147 06:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.715 nvme0n1 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.715 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.286 nvme0n1 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:43.286 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.287 
06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.287 06:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.860 nvme0n1 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
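The @769 through @783 frames are get_main_ns_ip from nvmf/common.sh choosing which address the host should dial: an associative array maps each transport to the environment variable holding the right address, tcp selects NVMF_INITIATOR_IP, and its value (10.0.0.1 here) is echoed back. A sketch of that logic, assuming the indirect expansion that the already-expanded trace no longer shows:

    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[tcp]}    # -> NVMF_INITIATOR_IP
    echo "${!ip}"               # indirect expansion -> 10.0.0.1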
DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.860 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.861 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.861 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.861 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.861 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.861 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.861 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.432 nvme0n1 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.432 06:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.432 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.433 06:30:03 
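The empty ckey= in this keyid=4 round (as in the earlier keyid=4 rounds) comes from the :+ use-alternate-value expansion visible in the @58 frame: the --dhchap-ctrlr-key option is generated only when a controller key exists for the key ID. With ckeys[4] empty the array expands to nothing, so the subsequent attach passes key4 alone and requests unidirectional (host-only) authentication:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"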
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.433 06:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.694 nvme0n1 00:27:44.694 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.694 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.694 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.694 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.694 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.694 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
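The @100, @101 and @102 frames mark the loop boundaries: at this point the digest advances from sha256 to sha384 and the dhgroup sweep restarts at ffdhe2048. The whole phase is a cross-product sweep over every digest, DH group and key ID, re-keying the target and reconnecting each time, roughly:

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: re-key the target
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach
        done
      done
    done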
ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:44.956 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:44.957 nvme0n1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 nvme0n1 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.957 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:45.218 
06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.218 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.219 nvme0n1 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.219 
06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.219 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 nvme0n1 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.481 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.482 nvme0n1 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.482 06:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 nvme0n1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 
06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.744 06:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 nvme0n1 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:45.744 06:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.744 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.007 nvme0n1 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.007 06:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.007 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.269 nvme0n1 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.269 
06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.269 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
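
[annotation] Each nvme0n1 block above is one iteration of the harness loop over digests, DH groups, and key ids (sha384 with ffdhe2048 and ffdhe3072 so far; ffdhe4096 follows below): program the key on the kernel nvmet target (the nvmet_auth_set_key step), restrict the SPDK initiator to the digest/DH-group pair under test, attach with DH-HMAC-CHAP, confirm the controller enumerates, then detach. A minimal standalone sketch of one such pass — assuming rpc_cmd resolves to scripts/rpc.py as in SPDK's test harness, that keyN/ckeyN name keys already registered in SPDK's keyring, and that the target already holds the matching DHHC-1 secret:

  # One DH-HMAC-CHAP attach/verify/detach pass, mirroring the traced RPCs.
  # Address, port, and NQNs are the values used throughout this run.
  digest=sha384
  dhgroup=ffdhe2048
  keyid=1

  # Limit the initiator to the digest/DH-group pair under test.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key; the controller challenge key is added only when
  # one is defined (key id 4 in this trace has no ckey, so the flag is omitted).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # The attach fails outright if authentication fails; verify by name anyway,
  # exactly as the trace does with bdev_nvme_get_controllers + jq.
  [[ "$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # Detach so the next digest/dhgroup/keyid combination starts clean.
  scripts/rpc.py bdev_nvme_detach_controller nvme0
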
00:27:46.270 nvme0n1 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.270 06:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.270 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.529 nvme0n1 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.529 06:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:46.529 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.530 06:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.530 06:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.788 nvme0n1 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.788 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.789 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.047 nvme0n1 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.047 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.306 nvme0n1 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.306 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.564 nvme0n1 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.564 06:30:06 
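[Annotation] The get_main_ns_ip helper is traced again just below (nvmf/common.sh@769-783). It maps the transport to the *name* of an environment variable, then resolves that name with bash indirect expansion. A reconstruction from the traced lines; TEST_TRANSPORT and the IP values are assumed names seeded here so the sketch runs standalone:

    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the name of an env var
        [[ -z ${!ip} ]] && return 1            # indirect expansion resolves it
        echo "${!ip}"                          # -> 10.0.0.1 in this run
    }
    get_main_ns_ip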
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.564 06:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.822 nvme0n1 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.822 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.080 06:30:07 
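[Annotation] The attach that just completed maps directly onto standalone SPDK RPC calls; every flag below is taken from the trace. key1/ckey1 are the names of keys registered with SPDK's keyring earlier in the run (that registration is not shown in this excerpt):

    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1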
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.080 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.338 nvme0n1 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.338 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.339 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.339 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.339 06:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.613 nvme0n1 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.613 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.872 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.130 nvme0n1 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.130 06:30:08 
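[Annotation] Note the ckey expansion at host/auth.sh@58, visible again just below: ${var:+word} expands to word only when var is set and non-empty, so the --dhchap-ctrlr-key pair is appended only for keys that have a controller counterpart. keyid 4's ckey is empty in this run (ckey= at @46), which is why its attach carries no --dhchap-ctrlr-key: the host authenticates to the target but does not demand bidirectional authentication. A small runnable demonstration with dummy values:

    # ${var:+word} yields word only when var is set and non-empty.
    ckeys=([1]='dummy-ctrl-key' [4]='')   # keyid 4 has no controller key
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # keyid=1 extra args: --dhchap-ctrlr-key ckey1
    # keyid=4 extra args: <none>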
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.130 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.131 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.131 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.131 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.131 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.131 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.389 nvme0n1 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:49.389 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.390 06:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.957 nvme0n1 00:27:49.957 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.957 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.957 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.957 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.957 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.958 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.528 nvme0n1 00:27:50.528 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.528 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.528 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.528 06:30:09 
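[Annotation] The verification step traced around here lists the controllers, extracts their names with jq, and compares against the expected name before tearing down. The odd-looking [[ nvme0 == \n\v\m\e\0 ]] in the log is just bash xtrace escaping a quoted right-hand side of == character by character; it is a literal comparison against "nvme0". As standalone commands:

    # List controllers, confirm the expected name, then detach it.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] && scripts/rpc.py bdev_nvme_detach_controller nvme0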
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.529 06:30:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.529 06:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.101 nvme0n1 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.101 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.101 
06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.673 nvme0n1 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.673 06:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.673 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.240 nvme0n1 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:52.240 06:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.240 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.241 06:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.241 nvme0n1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:52.241 06:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.241 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.500 nvme0n1 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.500 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.501 nvme0n1 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.501 06:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.760 nvme0n1 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.760 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.761 nvme0n1 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.761 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.020 nvme0n1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.020 nvme0n1 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.020 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:53.278 
06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:53.278 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 nvme0n1 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.279 
06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.279 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.538 nvme0n1 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.538 06:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.538 nvme0n1 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.538 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.800 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.801 nvme0n1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.801 
06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.801 06:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.801 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 nvme0n1 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:54.061 06:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.061 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.321 nvme0n1 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.321 06:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.321 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.322 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.322 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.322 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.322 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.583 nvme0n1 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.583 
06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.583 06:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
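Each iteration traced above follows the same connect_authenticate pattern: the initiator is restricted to a single digest/DH-group pair, a TCP controller is attached with the DHHC-1 key under test, the attach is verified, and the controller is detached. Condensed into one place, the per-iteration flow amounts to the sketch below; it assumes, as in the surrounding harness, that rpc_cmd forwards to SPDK's scripts/rpc.py and that the secrets were registered beforehand under the keyring names key0..key4 and ckey0..ckey4 (that registration is not visible in this excerpt):

    # One traced iteration: configure, attach with DH-HMAC-CHAP, verify, detach.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # The controller key is optional; key ID 4 has none, so the flag is
        # appended only when ckeys[keyid] is non-empty (the traced @58 idiom).
        local -a ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Allow exactly one digest and one DH group for this pass.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach over TCP, authenticating with the key under test.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded iff the controller is visible, then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Re-issuing bdev_nvme_set_options with a single digest and DH group on every pass (rather than the full lists) means a successful attach can only have negotiated that exact pairing, which is presumably why the trace repeats it for each of sha512 x {ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192} x key IDs 0-4.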
00:27:54.843 nvme0n1 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.843 06:30:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.843 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.133 nvme0n1 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.133 06:30:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.133 06:30:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.133 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 nvme0n1 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.392 06:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.652 nvme0n1 00:27:55.652 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.652 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.652 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.652 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.652 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.652 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.911 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.170 nvme0n1 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.170 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.171 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.171 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.171 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.171 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.171 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.171 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.171 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.430 nvme0n1 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
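# Note on the keyid 4 pass above: it has no controller key (the trace shows
# ckey= empty and [[ -z '' ]]), so the attach carries --dhchap-key key4 only
# and authentication is host-to-controller only. The harness gets this for
# free from bash's ${var:+word} expansion, seen in the trace as
#   ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
# which yields the extra flag only when ckeys[keyid] is set and non-empty.
# A minimal standalone sketch of that behaviour (array values are made up):
ckeys=([1]="DHHC-1:02:example" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-no --dhchap-ctrlr-key flag}"
done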
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDliNzBjZTFlY2U5NTMyMDliZTU2ODQ2YjM3MWYyOGLVegYT: 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTVjYWY3MGU0NDliNDkzYWZhMGM5NzQ2ZjQyNjExOGNmZTNiYzUwZDNjNzg2OGVkZTFlMGY4MzQ2ZjUyNTU3OZL4gEM=: 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.430 06:30:15 
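# The loop traced around this point repeats one cycle per (dhgroup, keyid)
# pair: program the target side via nvmet_auth_set_key, restrict the host to
# the matching digest/dhgroup, attach with the key pair, confirm the
# controller came up, and detach. A condensed sketch of one host-side pass,
# using the exact commands from the trace (rpc_cmd wraps SPDK's scripts/rpc.py
# in this harness):
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify the controller exists, then tear it down for the next keyid
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0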
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.430 06:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.997 nvme0n1 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.997 06:30:16 
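# Aside on the secrets exchanged above: they use the DH-HMAC-CHAP "DHHC-1"
# representation, DHHC-1:<hh>:<base64 payload>:, where <hh> names the optional
# secret hash transform (00 = none, 01/02/03 = SHA-256/384/512) and the
# payload is the raw secret followed by its CRC-32. Decoding keyid 1's host
# secret, for example, yields a 48-byte ASCII secret plus 4 checksum bytes:
echo 'MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==' \
    | base64 -d | head -c 48; echo
# prints: 1e7c3aaf480ae3e5b53c0bcf0fca586fb5ec41df2f3a6536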
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.997 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.564 nvme0n1 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.564 06:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.130 nvme0n1 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTJmMTAzMmVlNWIyYjhhZTcxYTllOThiNzdlNzgwM2EzNWU0YjdjZGQyMzViZDJlpa2v4g==: 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: ]] 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGExYjIwM2RjZDgxMmU1NzBlNGRlOWI4MDVmOGEyY2Q8uW+D: 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.130 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.131 06:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.699 nvme0n1 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.699 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzhlODYxZjA3MTY4MDYwNTE2MzQzZWI0ZDczZWRiZWI5OTNiZjgyMTc2MGI0ZmZiYzA2NzkxNTVhZGM2ZjE4Zh3zEm4=: 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.700 06:30:18 
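# get_main_ns_ip, expanded many times in this trace, resolves which address
# the host should dial for the transport under test. A reconstruction from
# the expanded lines (the TEST_TRANSPORT variable name is an assumption; the
# trace only shows its value, tcp):
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion: NVMF_INITIATOR_IP=10.0.0.1 here
    echo "${!ip}"
}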
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.700 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.362 nvme0n1 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
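# From here the flow switches to negative testing: the target is re-keyed to
# sha256/ffdhe2048 keyid 1, and attach attempts that omit the key or present
# the wrong one must fail with the JSON-RPC Input/output error (Code=-5)
# dumped below. The assertions ride on the harness's NOT helper; a simplified
# reconstruction from the expanded trace (the real helper also validates its
# argument via valid_exec_arg and special-cases signal exits above 128):
NOT() {
    local es=0
    "$@" || es=$?
    (( !es == 0 ))   # exit 0 exactly when the wrapped command failed
}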
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 2024/11/27 06:30:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:59.363 request: 00:27:59.363 { 00:27:59.363 "method": "bdev_nvme_attach_controller", 00:27:59.363 "params": { 00:27:59.363 "name": "nvme0", 00:27:59.363 "trtype": "tcp", 00:27:59.363 "traddr": "10.0.0.1", 00:27:59.363 "adrfam": "ipv4", 00:27:59.363 "trsvcid": "4420", 00:27:59.363 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:59.363 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:59.363 "prchk_reftag": false, 00:27:59.363 "prchk_guard": false, 00:27:59.363 "hdgst": false, 00:27:59.363 "ddgst": false, 00:27:59.363 "allow_unrecognized_csi": false 00:27:59.363 } 00:27:59.363 } 00:27:59.363 Got JSON-RPC error response 00:27:59.363 GoRPCClient: error on JSON-RPC call 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 2024/11/27 06:30:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:59.363 request: 00:27:59.363 { 00:27:59.363 "method": "bdev_nvme_attach_controller", 00:27:59.363 "params": { 00:27:59.363 "name": "nvme0", 00:27:59.363 "trtype": "tcp", 00:27:59.363 "traddr": "10.0.0.1", 00:27:59.363 "adrfam": "ipv4", 00:27:59.363 "trsvcid": "4420", 00:27:59.363 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:59.363 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:59.363 "prchk_reftag": false, 00:27:59.363 "prchk_guard": false, 
00:27:59.363 "hdgst": false, 00:27:59.363 "ddgst": false, 00:27:59.363 "dhchap_key": "key2", 00:27:59.363 "allow_unrecognized_csi": false 00:27:59.363 } 00:27:59.363 } 00:27:59.363 Got JSON-RPC error response 00:27:59.363 GoRPCClient: error on JSON-RPC call 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:59.363 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.364 2024/11/27 06:30:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:59.364 request: 00:27:59.364 { 00:27:59.364 "method": "bdev_nvme_attach_controller", 00:27:59.364 "params": { 00:27:59.364 "name": "nvme0", 00:27:59.364 "trtype": "tcp", 00:27:59.364 "traddr": "10.0.0.1", 00:27:59.364 "adrfam": "ipv4", 00:27:59.364 "trsvcid": "4420", 00:27:59.364 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:59.364 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:59.364 "prchk_reftag": false, 00:27:59.364 "prchk_guard": false, 00:27:59.364 "hdgst": false, 00:27:59.364 "ddgst": false, 00:27:59.364 "dhchap_key": "key1", 00:27:59.364 "dhchap_ctrlr_key": "ckey2", 00:27:59.364 "allow_unrecognized_csi": false 00:27:59.364 } 00:27:59.364 } 00:27:59.364 Got JSON-RPC error response 00:27:59.364 GoRPCClient: error on JSON-RPC call 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.364 nvme0n1 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.364 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.626 2024/11/27 06:30:18 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-5 Msg=Input/output error 00:27:59.626 request: 00:27:59.626 { 00:27:59.626 "method": "bdev_nvme_set_keys", 00:27:59.626 "params": { 00:27:59.626 "name": "nvme0", 00:27:59.626 "dhchap_key": "key1", 00:27:59.626 "dhchap_ctrlr_key": "ckey2" 00:27:59.626 } 00:27:59.626 } 00:27:59.626 Got JSON-RPC error response 00:27:59.626 GoRPCClient: error on JSON-RPC call 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:59.626 06:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:00.564 06:30:19 
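# The passing path above exercises live re-keying: with nvme0 attached using
# key1/ckey1 (and --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1), the
# target moves to keyid 2 and the host follows without a detach:
nvmet_auth_set_key sha256 ffdhe2048 2   # target-side helper from this suite
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# A mismatched pair (key1 with ckey2) is then rejected with Code=-5, after
# which the 1-second controller-loss timeout removes nvme0; the jq-length
# poll in the trace waits for that before the target is reset to keyid 1.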
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWU3YzNhYWY0ODBhZTNlNWI1M2MwYmNmMGZjYTU4NmZiNWVjNDFkZjJmM2E2NTM2w8gllQ==: 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: ]] 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU0ZmQyZjI5MmU4NWQ3NGEyNjg2MjQ2MDE0OWMxMzM1NzU4YzFhMDk4ZDNjZjVhfR8FMw==: 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.564 06:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.564 nvme0n1 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODM0NDFjZGRhZjQ1ZDM4NzJkODM2NzYyNDg5YmJmOWX2aN3y: 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: ]] 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2FiZDI5YzMwZmMxYmU1OGUzMjQ0MjIxODAwYzBhMjeAtnup: 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.564 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.822 2024/11/27 06:30:20 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:28:00.822 request: 00:28:00.822 { 00:28:00.822 "method": "bdev_nvme_set_keys", 00:28:00.822 "params": { 00:28:00.822 "name": "nvme0", 00:28:00.822 "dhchap_key": "key2", 00:28:00.822 "dhchap_ctrlr_key": "ckey1" 00:28:00.822 } 00:28:00.822 } 00:28:00.822 Got JSON-RPC error response 00:28:00.822 GoRPCClient: error on JSON-RPC call 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:00.822 06:30:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:00.822 06:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:01.755 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.755 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:01.755 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.755 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.755 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.755 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:01.755 06:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:02.689 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.689 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.689 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:02.689 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.689 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.689 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:02.689 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.690 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.690 rmmod nvme_tcp 00:28:02.948 rmmod nvme_fabrics 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 111406 ']' 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 111406 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 111406 ']' 
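The repeated bdev_nvme_get_controllers / jq length / sleep 1s triplets above are a poll loop: after the rejected key2/ckey1 update, the test waits for the stale controller to finish tearing down. A minimal sketch of that loop, assuming the same rpc_cmd wrapper and jq availability as the trace:

# Poll once per second until no controllers remain; the trace shows the
# count going 1 -> 1 -> 0 across three iterations.
while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
done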
00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 111406 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111406 00:28:02.948 killing process with pid 111406 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111406' 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 111406 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 111406 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:02.948 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
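nvmf_veth_fini above unwinds the virtual topology in strict order: detach every interface from the bridge, bring it down, then delete. Collected into one runnable block, with interface and namespace names exactly as in this run:

# Detach the bridge-facing veth ends, bring them down, then remove the
# bridge, the host-side interfaces, and the namespaced target interfaces.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2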
00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:03.207 06:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:03.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:03.771 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:04.028 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:04.028 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.BzC /tmp/spdk.key-null.10V /tmp/spdk.key-sha256.UsE /tmp/spdk.key-sha384.Cgt /tmp/spdk.key-sha512.Qil /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:28:04.028 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:04.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:04.285 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:04.285 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:04.285 ************************************ 00:28:04.285 END TEST nvmf_auth_host 00:28:04.285 ************************************ 00:28:04.285 00:28:04.285 real 0m32.628s 00:28:04.285 user 0m29.379s 00:28:04.285 sys 0m2.890s 00:28:04.285 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.285 06:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
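clean_kernel_target above removes the kernel nvmet configfs tree leaf-first, since a directory cannot be rmdir'd while a namespace, port link, or child entry still references it. A sketch of the same sequence; the trace only shows a bare 'echo 0', so writing it to the namespace enable attribute is an assumption:

# Disable the namespace (assumed target of the bare 'echo 0' above), drop
# the port -> subsystem link, then rmdir bottom-up and unload the modules.
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe -r nvmet_tcp nvmet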
00:28:04.285 06:30:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:04.285 06:30:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:04.285 06:30:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:04.285 06:30:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.285 06:30:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.285 ************************************ 00:28:04.285 START TEST nvmf_digest 00:28:04.285 ************************************ 00:28:04.285 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:04.544 * Looking for test storage... 00:28:04.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.544 --rc genhtml_branch_coverage=1 00:28:04.544 --rc genhtml_function_coverage=1 00:28:04.544 --rc genhtml_legend=1 00:28:04.544 --rc geninfo_all_blocks=1 00:28:04.544 --rc geninfo_unexecuted_blocks=1 00:28:04.544 00:28:04.544 ' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.544 --rc genhtml_branch_coverage=1 00:28:04.544 --rc genhtml_function_coverage=1 00:28:04.544 --rc genhtml_legend=1 00:28:04.544 --rc geninfo_all_blocks=1 00:28:04.544 --rc geninfo_unexecuted_blocks=1 00:28:04.544 00:28:04.544 ' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.544 --rc genhtml_branch_coverage=1 00:28:04.544 --rc genhtml_function_coverage=1 00:28:04.544 --rc genhtml_legend=1 00:28:04.544 --rc geninfo_all_blocks=1 00:28:04.544 --rc geninfo_unexecuted_blocks=1 00:28:04.544 00:28:04.544 ' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.544 --rc genhtml_branch_coverage=1 00:28:04.544 --rc genhtml_function_coverage=1 00:28:04.544 --rc genhtml_legend=1 00:28:04.544 --rc geninfo_all_blocks=1 00:28:04.544 --rc geninfo_unexecuted_blocks=1 00:28:04.544 00:28:04.544 ' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.544 06:30:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.544 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:04.545 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:04.545 Cannot find device "nvmf_init_br" 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:04.545 Cannot find device "nvmf_init_br2" 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:04.545 Cannot find device "nvmf_tgt_br" 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:28:04.545 Cannot find device "nvmf_tgt_br2" 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:04.545 Cannot find device "nvmf_init_br" 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:28:04.545 06:30:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:04.545 Cannot find device "nvmf_init_br2" 00:28:04.545 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:28:04.545 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:04.545 Cannot find device "nvmf_tgt_br" 00:28:04.545 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:28:04.545 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:04.545 Cannot find device "nvmf_tgt_br2" 00:28:04.545 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:28:04.545 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:04.545 Cannot find device "nvmf_br" 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:04.546 Cannot find device "nvmf_init_if" 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:04.546 Cannot find device "nvmf_init_if2" 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:04.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:04.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:04.546 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:04.804 06:30:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:04.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:04.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:28:04.804 00:28:04.804 --- 10.0.0.3 ping statistics --- 00:28:04.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.804 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:28:04.804 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:04.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:04.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:28:04.805 00:28:04.805 --- 10.0.0.4 ping statistics --- 00:28:04.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.805 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:04.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:28:04.805 00:28:04.805 --- 10.0.0.1 ping statistics --- 00:28:04.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.805 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:04.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:28:04.805 00:28:04.805 --- 10.0.0.2 ping statistics --- 00:28:04.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.805 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:04.805 ************************************ 00:28:04.805 START TEST nvmf_digest_clean 00:28:04.805 ************************************ 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
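The ipts wrapper used during the setup above tags every firewall rule with an SPDK_NVMF comment, which is what let the earlier nvmftestfini flush them wholesale instead of tracking rule numbers. The pattern, taken verbatim from this trace:

# Setup: the comment embeds the rule itself, making it self-describing.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# Teardown (the iptr call in nvmftestfini): drop every tagged rule in one pass.
iptables-save | grep -v SPDK_NVMF | iptables-restore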
00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=113043 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 113043 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 113043 ']' 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.805 06:30:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.805 [2024-11-27 06:30:24.294495] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:28:04.805 [2024-11-27 06:30:24.294543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.063 [2024-11-27 06:30:24.409323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:05.063 [2024-11-27 06:30:24.438285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.063 [2024-11-27 06:30:24.455755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.063 [2024-11-27 06:30:24.455791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.063 [2024-11-27 06:30:24.455797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.063 [2024-11-27 06:30:24.455802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:05.063 [2024-11-27 06:30:24.455806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.063 [2024-11-27 06:30:24.456078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.998 null0 00:28:05.998 [2024-11-27 06:30:25.288624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.998 [2024-11-27 06:30:25.312695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=113089 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 113089 /var/tmp/bperf.sock 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 113089 ']' 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
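Earlier in this trace, nvmfappstart ran the target inside the namespace with --wait-for-rpc, so the 10.0.0.3:4420 listener only appeared once the test finished configuring it over the RPC socket. A sketch of that launch; backgrounding with & and capturing $! are assumptions, since the trace only shows the resulting pid 113043:

# Start nvmf_tgt in the test namespace, suspended until framework_start_init.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!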
00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.998 06:30:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.998 [2024-11-27 06:30:25.359132] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:28:05.998 [2024-11-27 06:30:25.359191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113089 ] 00:28:05.998 [2024-11-27 06:30:25.473416] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:05.998 [2024-11-27 06:30:25.494145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.998 [2024-11-27 06:30:25.513124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.933 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.933 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:06.933 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.933 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.933 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.192 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.450 nvme0n1 00:28:07.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:07.450 06:30:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.450 Running I/O for 2 seconds... 
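The bdevperf invocation above drives the first digest pass. A readback of its flags, per bdevperf's usual documentation (the trace itself shows only the raw command line):

# -m 2: core mask, i.e. the reactor lands on core 1 as the notice confirms
# -r:   private RPC socket so this bperf instance is driven independently
# -w randread -o 4096 -q 128 -t 2: workload, IO size (bytes), queue depth,
#       runtime (seconds), matching the "Running I/O for 2 seconds" banner
# -z:   stay idle until perform_tests arrives; --wait-for-rpc defers
#       framework init until the framework_start_init call seen above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc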
00:28:09.368 22289.00 IOPS, 87.07 MiB/s [2024-11-27T06:30:28.898Z] 22468.00 IOPS, 87.77 MiB/s 00:28:09.368 Latency(us) 00:28:09.368 [2024-11-27T06:30:28.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.368 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:09.368 nvme0n1 : 2.00 22486.04 87.84 0.00 0.00 5685.65 3327.21 18854.20 00:28:09.368 [2024-11-27T06:30:28.898Z] =================================================================================================================== 00:28:09.368 [2024-11-27T06:30:28.898Z] Total : 22486.04 87.84 0.00 0.00 5685.65 3327.21 18854.20 00:28:09.368 { 00:28:09.368 "results": [ 00:28:09.368 { 00:28:09.368 "job": "nvme0n1", 00:28:09.368 "core_mask": "0x2", 00:28:09.368 "workload": "randread", 00:28:09.368 "status": "finished", 00:28:09.368 "queue_depth": 128, 00:28:09.368 "io_size": 4096, 00:28:09.368 "runtime": 2.004088, 00:28:09.368 "iops": 22486.038537229902, 00:28:09.368 "mibps": 87.8360880360543, 00:28:09.368 "io_failed": 0, 00:28:09.368 "io_timeout": 0, 00:28:09.368 "avg_latency_us": 5685.645300086031, 00:28:09.368 "min_latency_us": 3327.2123076923076, 00:28:09.368 "max_latency_us": 18854.203076923077 00:28:09.368 } 00:28:09.368 ], 00:28:09.368 "core_count": 1 00:28:09.368 } 00:28:09.368 06:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:09.368 06:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:09.368 06:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:09.368 06:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:09.368 06:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:09.368 | select(.opcode=="crc32c") 00:28:09.368 | "\(.module_name) \(.executed)"' 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 113089 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 113089 ']' 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 113089 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113089 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
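The MiB/s column in the result table above follows directly from IOPS and the 4096-byte IO size; a quick check reproduces it:

# 22486.04 IOPS * 4096 B per IO / 1048576 B per MiB = 87.84 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 22486.04 * 4096 / 1048576 }'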
00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113089' 00:28:09.627 killing process with pid 113089 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 113089 00:28:09.627 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.627 00:28:09.627 Latency(us) 00:28:09.627 [2024-11-27T06:30:29.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.627 [2024-11-27T06:30:29.157Z] =================================================================================================================== 00:28:09.627 [2024-11-27T06:30:29.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.627 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 113089 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=113179 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 113179 /var/tmp/bperf.sock 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 113179 ']' 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.884 06:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.884 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:09.884 Zero copy mechanism will not be used. 00:28:09.884 [2024-11-27 06:30:29.224091] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:28:09.884 [2024-11-27 06:30:29.224153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113179 ] 00:28:09.884 [2024-11-27 06:30:29.337745] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:09.884 [2024-11-27 06:30:29.365205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.884 [2024-11-27 06:30:29.381737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.817 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.817 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:10.817 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.817 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.817 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:11.075 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.075 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.075 nvme0n1 00:28:11.334 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:11.334 06:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:11.334 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:11.334 Zero copy mechanism will not be used. 00:28:11.334 Running I/O for 2 seconds... 
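[Note] Each run_bperf iteration follows the same sequence traced above: start bdevperf suspended (--wait-for-rpc), initialize its framework over the bperf socket, attach an NVMe-oF TCP controller with data digest enabled (--ddgst — this is what generates the crc32c operations being counted), then drive the 2-second workload via bdevperf.py. A sketch of the sequence, assuming the same repo layout as the log:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests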
00:28:13.213 10944.00 IOPS, 1368.00 MiB/s [2024-11-27T06:30:32.743Z] 11760.00 IOPS, 1470.00 MiB/s 00:28:13.213 Latency(us) 00:28:13.213 [2024-11-27T06:30:32.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.213 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:13.213 nvme0n1 : 2.00 11754.88 1469.36 0.00 0.00 1358.62 589.19 4234.63 00:28:13.213 [2024-11-27T06:30:32.743Z] =================================================================================================================== 00:28:13.213 [2024-11-27T06:30:32.743Z] Total : 11754.88 1469.36 0.00 0.00 1358.62 589.19 4234.63 00:28:13.213 { 00:28:13.213 "results": [ 00:28:13.213 { 00:28:13.213 "job": "nvme0n1", 00:28:13.213 "core_mask": "0x2", 00:28:13.213 "workload": "randread", 00:28:13.213 "status": "finished", 00:28:13.213 "queue_depth": 16, 00:28:13.213 "io_size": 131072, 00:28:13.213 "runtime": 2.002233, 00:28:13.213 "iops": 11754.875681301826, 00:28:13.213 "mibps": 1469.3594601627283, 00:28:13.213 "io_failed": 0, 00:28:13.213 "io_timeout": 0, 00:28:13.213 "avg_latency_us": 1358.621638864195, 00:28:13.213 "min_latency_us": 589.1938461538462, 00:28:13.213 "max_latency_us": 4234.633846153846 00:28:13.213 } 00:28:13.213 ], 00:28:13.213 "core_count": 1 00:28:13.213 } 00:28:13.213 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:13.213 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:13.213 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:13.213 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:13.213 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:13.213 | select(.opcode=="crc32c") 00:28:13.213 | "\(.module_name) \(.executed)"' 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 113179 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 113179 ']' 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 113179 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113179 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113179' 00:28:13.474 killing process with pid 113179 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 113179 00:28:13.474 Received shutdown signal, test time was about 2.000000 seconds 00:28:13.474 00:28:13.474 Latency(us) 00:28:13.474 [2024-11-27T06:30:33.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.474 [2024-11-27T06:30:33.004Z] =================================================================================================================== 00:28:13.474 [2024-11-27T06:30:33.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.474 06:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 113179 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=113263 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 113263 /var/tmp/bperf.sock 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 113263 ']' 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.734 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.734 [2024-11-27 06:30:33.081148] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:28:13.734 [2024-11-27 06:30:33.081202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113263 ] 00:28:13.734 [2024-11-27 06:30:33.192308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:13.734 [2024-11-27 06:30:33.217450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.734 [2024-11-27 06:30:33.233955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.994 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.994 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:13.994 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:13.994 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:13.994 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:13.994 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.994 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.255 nvme0n1 00:28:14.515 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.516 06:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.516 Running I/O for 2 seconds... 
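[Note] nvmf_digest_clean sweeps a small workload matrix; the calls visible at digest.sh@129-131, plus the 4 KiB randread run that precedes this excerpt, amount to:

    run_bperf randread  4096   128 false   # small blocks, deep queue (before this excerpt)
    run_bperf randread  131072 16  false   # large blocks, shallow queue
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false   # last argument is scan_dsa

The 131072-byte jobs are also why the "greater than zero copy threshold (65536)" notice repeats for those runs.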
00:28:16.399 30149.00 IOPS, 117.77 MiB/s [2024-11-27T06:30:35.929Z] 30178.50 IOPS, 117.88 MiB/s 00:28:16.399 Latency(us) 00:28:16.399 [2024-11-27T06:30:35.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.399 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:16.399 nvme0n1 : 2.01 30173.95 117.87 0.00 0.00 4238.11 2205.54 7410.61 00:28:16.399 [2024-11-27T06:30:35.929Z] =================================================================================================================== 00:28:16.399 [2024-11-27T06:30:35.929Z] Total : 30173.95 117.87 0.00 0.00 4238.11 2205.54 7410.61 00:28:16.399 { 00:28:16.399 "results": [ 00:28:16.399 { 00:28:16.399 "job": "nvme0n1", 00:28:16.399 "core_mask": "0x2", 00:28:16.399 "workload": "randwrite", 00:28:16.399 "status": "finished", 00:28:16.399 "queue_depth": 128, 00:28:16.399 "io_size": 4096, 00:28:16.399 "runtime": 2.006201, 00:28:16.399 "iops": 30173.945681414774, 00:28:16.399 "mibps": 117.86697531802646, 00:28:16.399 "io_failed": 0, 00:28:16.399 "io_timeout": 0, 00:28:16.399 "avg_latency_us": 4238.105079350154, 00:28:16.399 "min_latency_us": 2205.5384615384614, 00:28:16.399 "max_latency_us": 7410.609230769231 00:28:16.399 } 00:28:16.399 ], 00:28:16.399 "core_count": 1 00:28:16.399 } 00:28:16.399 06:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:16.399 06:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:16.399 06:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:16.399 06:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:16.399 | select(.opcode=="crc32c") 00:28:16.399 | "\(.module_name) \(.executed)"' 00:28:16.399 06:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 113263 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 113263 ']' 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 113263 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113263 00:28:16.661 killing process with pid 113263 00:28:16.661 Received shutdown signal, test time was about 2.000000 seconds 00:28:16.661 00:28:16.661 Latency(us) 00:28:16.661 [2024-11-27T06:30:36.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:28:16.661 [2024-11-27T06:30:36.191Z] =================================================================================================================== 00:28:16.661 [2024-11-27T06:30:36.191Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113263' 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 113263 00:28:16.661 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 113263 00:28:16.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=113336 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 113336 /var/tmp/bperf.sock 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 113336 ']' 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.922 06:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:16.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.922 Zero copy mechanism will not be used. 00:28:16.922 [2024-11-27 06:30:36.269961] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:28:16.922 [2024-11-27 06:30:36.270022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113336 ] 00:28:16.922 [2024-11-27 06:30:36.383561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:16.922 [2024-11-27 06:30:36.408470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.922 [2024-11-27 06:30:36.425053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.865 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.865 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:17.865 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.865 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.865 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:18.127 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.127 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.388 nvme0n1 00:28:18.388 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:18.388 06:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.388 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.388 Zero copy mechanism will not be used. 00:28:18.388 Running I/O for 2 seconds... 
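[Note] The JSON block that perform_tests prints after each run is machine-readable; the summary table is rendered from the same fields. Against a hypothetical capture of that JSON, something like the following pulls out the numbers compared across runs:

    jq -r '.results[] |
        "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us))"' \
        bperf_result.json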
00:28:20.268 11433.00 IOPS, 1429.12 MiB/s [2024-11-27T06:30:39.798Z] 11585.50 IOPS, 1448.19 MiB/s 00:28:20.268 Latency(us) 00:28:20.268 [2024-11-27T06:30:39.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.268 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:20.268 nvme0n1 : 2.00 11581.87 1447.73 0.00 0.00 1378.43 888.52 3528.86 00:28:20.268 [2024-11-27T06:30:39.798Z] =================================================================================================================== 00:28:20.268 [2024-11-27T06:30:39.798Z] Total : 11581.87 1447.73 0.00 0.00 1378.43 888.52 3528.86 00:28:20.268 { 00:28:20.268 "results": [ 00:28:20.268 { 00:28:20.268 "job": "nvme0n1", 00:28:20.268 "core_mask": "0x2", 00:28:20.268 "workload": "randwrite", 00:28:20.268 "status": "finished", 00:28:20.268 "queue_depth": 16, 00:28:20.268 "io_size": 131072, 00:28:20.268 "runtime": 2.00244, 00:28:20.268 "iops": 11581.870118455485, 00:28:20.268 "mibps": 1447.7337648069356, 00:28:20.268 "io_failed": 0, 00:28:20.268 "io_timeout": 0, 00:28:20.268 "avg_latency_us": 1378.4346390001858, 00:28:20.268 "min_latency_us": 888.516923076923, 00:28:20.268 "max_latency_us": 3528.8615384615387 00:28:20.268 } 00:28:20.268 ], 00:28:20.268 "core_count": 1 00:28:20.268 } 00:28:20.268 06:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:20.268 06:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:20.268 06:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:20.268 06:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:20.268 | select(.opcode=="crc32c") 00:28:20.268 | "\(.module_name) \(.executed)"' 00:28:20.268 06:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.529 06:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 113336 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 113336 ']' 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 113336 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113336 00:28:20.529 killing process with pid 113336 00:28:20.529 Received shutdown signal, test time was about 2.000000 seconds 00:28:20.529 00:28:20.529 Latency(us) 00:28:20.529 [2024-11-27T06:30:40.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:28:20.529 [2024-11-27T06:30:40.059Z] =================================================================================================================== 00:28:20.529 [2024-11-27T06:30:40.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.529 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.530 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.530 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113336' 00:28:20.530 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 113336 00:28:20.530 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 113336 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 113043 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 113043 ']' 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 113043 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113043 00:28:20.791 killing process with pid 113043 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113043' 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 113043 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 113043 00:28:20.791 ************************************ 00:28:20.791 END TEST nvmf_digest_clean 00:28:20.791 ************************************ 00:28:20.791 00:28:20.791 real 0m15.996s 00:28:20.791 user 0m30.861s 00:28:20.791 sys 0m3.382s 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:20.791 ************************************ 00:28:20.791 START TEST nvmf_digest_error 00:28:20.791 ************************************ 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:20.791 
06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:20.791 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=113444 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 113444 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 113444 ']' 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.792 06:30:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:21.053 [2024-11-27 06:30:40.351481] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:28:21.053 [2024-11-27 06:30:40.351640] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.053 [2024-11-27 06:30:40.466880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:21.053 [2024-11-27 06:30:40.490438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.053 [2024-11-27 06:30:40.508500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.053 [2024-11-27 06:30:40.508628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.053 [2024-11-27 06:30:40.508675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.053 [2024-11-27 06:30:40.508696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.053 [2024-11-27 06:30:40.508717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
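[Note] nvmf_digest_error starts the target with --wait-for-rpc precisely so crc32c can be rerouted before the accel framework initializes; the error module then sits in the digest path and can be told to corrupt results on demand. A sketch reconstructed from the rpc_cmd traces just below (digest.sh@104, then @63/@67), issued against the target's default RPC socket:

    scripts/rpc.py accel_assign_opc -o crc32c -m error            # route crc32c via the error module
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable  # pass-through while connecting
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # -i 256 as in the trace

With corruption armed and the host attached with --ddgst, affected reads complete with a bad data digest on the wire, which is what the error-path assertions below exercise.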
00:28:21.053 [2024-11-27 06:30:40.508950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.005 [2024-11-27 06:30:41.257326] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.005 null0 00:28:22.005 [2024-11-27 06:30:41.324613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.005 [2024-11-27 06:30:41.348676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=113488 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 113488 /var/tmp/bperf.sock 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 113488 ']' 00:28:22.005 06:30:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.005 06:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.005 [2024-11-27 06:30:41.394210] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:28:22.005 [2024-11-27 06:30:41.394361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113488 ] 00:28:22.005 [2024-11-27 06:30:41.508219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:22.005 [2024-11-27 06:30:41.533074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.266 [2024-11-27 06:30:41.550408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.838 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.838 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:22.838 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.838 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:23.099 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:23.099 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.099 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.099 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.099 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.099 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.361 nvme0n1 00:28:23.361 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:23.361 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:23.361 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.361 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.361 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:23.361 06:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.361 Running I/O for 2 seconds... 00:28:23.361 [2024-11-27 06:30:42.869261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.361 [2024-11-27 06:30:42.869384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.361 [2024-11-27 06:30:42.869441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.361 [2024-11-27 06:30:42.879621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.361 [2024-11-27 06:30:42.879722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.361 [2024-11-27 06:30:42.879775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.361 [2024-11-27 06:30:42.888364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.361 [2024-11-27 06:30:42.888468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.361 [2024-11-27 06:30:42.888518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.898893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.898998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.899044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.909718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.909814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.909859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.919897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.919997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.920081] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.929932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.929956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.929962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.940147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.940170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.940176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.950162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.950186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.950192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.960513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.960536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.960541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.970764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.970789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.970795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.979396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.979488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:42.979496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:42.991310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:42.991333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:23.623 [2024-11-27 06:30:42.991339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.623 [2024-11-27 06:30:43.001185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.623 [2024-11-27 06:30:43.001208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.623 [2024-11-27 06:30:43.001213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.009674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.009698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.009704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.019840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.019865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.019870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.030064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.030152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.030159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.039989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.040012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.040017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.049882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.049982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.049989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.060176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.060199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8049 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.060205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.070306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.070330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.070336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.081158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.081241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.081248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.091062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.091085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.091091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.100889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.100992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.100999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.110678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.110702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.110708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.120934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.120956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.120961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.130833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.130857] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.130863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.139178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.139201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.139207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.624 [2024-11-27 06:30:43.149915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.624 [2024-11-27 06:30:43.149937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.624 [2024-11-27 06:30:43.149942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.160076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.160164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.160171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.170375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.170398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.170404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.181070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.181091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.181097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.191286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.191372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.191380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.201212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.201234] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.201240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.211062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.211085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.211090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.221304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.221326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.221331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.231932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.231955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.231960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.240275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.240297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.240303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.886 [2024-11-27 06:30:43.250210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.886 [2024-11-27 06:30:43.250297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.886 [2024-11-27 06:30:43.250304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.260111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.260134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.260140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.270588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.270612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.270618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.280961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.280983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.280989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.290788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.290812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.290818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.300698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.300791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.300799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.311058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.311145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.311153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.321127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.321150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.321155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.331119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.331142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.331148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.341217] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.341239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.341245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.351498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.351521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.351527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.362112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.362194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.362201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.372015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.372038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.372044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.381964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.381985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.381990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.392766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.392788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.392793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.887 [2024-11-27 06:30:43.401304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.401387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.401394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:23.887 [2024-11-27 06:30:43.411884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:23.887 [2024-11-27 06:30:43.411914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.887 [2024-11-27 06:30:43.411920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.422343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.422366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.422372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.432074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.432160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.432167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.442298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.442321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.442327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.452279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.452302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.452307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.462129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.462152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.462157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.471987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.472010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.472016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.481853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.481877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.481882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.491801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.491889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.491896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.501847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.501870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.501876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.510584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.510609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.149 [2024-11-27 06:30:43.510615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.149 [2024-11-27 06:30:43.520243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.149 [2024-11-27 06:30:43.520268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.520273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.530668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.530690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.530696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.541248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.541272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.541278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.550769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.550794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.550799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.561143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.561168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.561173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.570946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.570969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.570974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.581509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.581532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.581537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.591456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.591480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.591485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.602410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.602495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.602503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.610694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.610719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.610725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.621416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.621503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.621510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.630665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.630689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.630695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.641890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.641923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.641928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.651029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.651051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.651057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.661124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.661147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.661153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.150 [2024-11-27 06:30:43.671321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.150 [2024-11-27 06:30:43.671344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.150 [2024-11-27 06:30:43.671350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.423 [2024-11-27 06:30:43.681226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.423 [2024-11-27 06:30:43.681250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.423 [2024-11-27 06:30:43.681255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.423 [2024-11-27 06:30:43.690356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.423 [2024-11-27 06:30:43.690380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.423 [2024-11-27 06:30:43.690386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.423 [2024-11-27 06:30:43.700604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.423 [2024-11-27 06:30:43.700698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.423 [2024-11-27 06:30:43.700706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.423 [2024-11-27 06:30:43.710817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.423 [2024-11-27 06:30:43.710842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.423 [2024-11-27 06:30:43.710848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.423 [2024-11-27 06:30:43.721548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.423 [2024-11-27 06:30:43.721573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.423 [2024-11-27 06:30:43.721579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.423 [2024-11-27 06:30:43.731063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.424 [2024-11-27 06:30:43.731159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.424 [2024-11-27 06:30:43.731167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.424 [2024-11-27 06:30:43.741751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.424 [2024-11-27 06:30:43.741777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.424 [2024-11-27 06:30:43.741783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.424 [2024-11-27 06:30:43.752137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.424 [2024-11-27 06:30:43.752161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23259 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.424 [2024-11-27 06:30:43.752167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.424 [2024-11-27 06:30:43.762311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.424 [2024-11-27 06:30:43.762335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.424 [2024-11-27 06:30:43.762341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.424 [2024-11-27 06:30:43.773010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.424 [2024-11-27 06:30:43.773103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.424 [2024-11-27 06:30:43.773112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.424 [2024-11-27 06:30:43.783382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.424 [2024-11-27 06:30:43.783405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.424 [2024-11-27 06:30:43.783410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.424 [2024-11-27 06:30:43.792297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.425 [2024-11-27 06:30:43.792385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.425 [2024-11-27 06:30:43.792393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.425 [2024-11-27 06:30:43.801994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.425 [2024-11-27 06:30:43.802017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.425 [2024-11-27 06:30:43.802023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.425 [2024-11-27 06:30:43.811503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.425 [2024-11-27 06:30:43.811589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.425 [2024-11-27 06:30:43.811597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.425 [2024-11-27 06:30:43.821608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.425 [2024-11-27 06:30:43.821633] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.425 [2024-11-27 06:30:43.821639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.425 [2024-11-27 06:30:43.832011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.425 [2024-11-27 06:30:43.832033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.425 [2024-11-27 06:30:43.832038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.425 [2024-11-27 06:30:43.840379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.426 [2024-11-27 06:30:43.840403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.426 [2024-11-27 06:30:43.840408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.426 [2024-11-27 06:30:43.852497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.426 [2024-11-27 06:30:43.852519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.426 [2024-11-27 06:30:43.852525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.426 25297.00 IOPS, 98.82 MiB/s [2024-11-27T06:30:43.956Z] [2024-11-27 06:30:43.863524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.426 [2024-11-27 06:30:43.863547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.426 [2024-11-27 06:30:43.863553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.427 [2024-11-27 06:30:43.873583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.427 [2024-11-27 06:30:43.873607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.427 [2024-11-27 06:30:43.873612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.427 [2024-11-27 06:30:43.883420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.427 [2024-11-27 06:30:43.883443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.427 [2024-11-27 06:30:43.883449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.427 [2024-11-27 06:30:43.891871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc365f0) 00:28:24.428 [2024-11-27 06:30:43.891895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.428 [2024-11-27 06:30:43.891901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.428 [2024-11-27 06:30:43.902565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.428 [2024-11-27 06:30:43.902651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.428 [2024-11-27 06:30:43.902659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.428 [2024-11-27 06:30:43.912489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.428 [2024-11-27 06:30:43.912513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.428 [2024-11-27 06:30:43.912519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.428 [2024-11-27 06:30:43.922345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.428 [2024-11-27 06:30:43.922368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.428 [2024-11-27 06:30:43.922374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.428 [2024-11-27 06:30:43.932192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.428 [2024-11-27 06:30:43.932215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.428 [2024-11-27 06:30:43.932220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.428 [2024-11-27 06:30:43.942065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.428 [2024-11-27 06:30:43.942088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.428 [2024-11-27 06:30:43.942094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:43.952234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:43.952256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:43.952261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:43.962571] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:43.962594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:43.962599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:43.971242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:43.971266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:43.971272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:43.981196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:43.981284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:43.981291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:43.991087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:43.991111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:43.991116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:44.002049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:44.002072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:44.002077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:44.011372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:44.011395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:44.011401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:44.021221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:44.021243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:44.021248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:44.031076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:44.031098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.696 [2024-11-27 06:30:44.031104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.696 [2024-11-27 06:30:44.040949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.696 [2024-11-27 06:30:44.040970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.040976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.050991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.051012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.051018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.062053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.062076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.062082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.071335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.071359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.071365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.080992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.081013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.081019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.091513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.091536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.091542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.101415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.101439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.101445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.111322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.111345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.111350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.121848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.121872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.121877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.132023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.132111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.132119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.141911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.141933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.141939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.151767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.151855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.151863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.162106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.162130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.162135] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.172332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.172355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.172360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.182201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.182224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.182230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.192528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.192550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.192556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.202652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.202674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.202680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.211073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.211159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.211166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.697 [2024-11-27 06:30:44.221615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.697 [2024-11-27 06:30:44.221639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.697 [2024-11-27 06:30:44.221645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.232343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.232366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.958 [2024-11-27 06:30:44.232372] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.242256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.242344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.958 [2024-11-27 06:30:44.242352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.253100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.253182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.958 [2024-11-27 06:30:44.253190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.263043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.263066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.958 [2024-11-27 06:30:44.263071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.272914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.272936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.958 [2024-11-27 06:30:44.272941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.282683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.282708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.958 [2024-11-27 06:30:44.282713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.292306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.292390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.958 [2024-11-27 06:30:44.292398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.302000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.302022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.958 [2024-11-27 06:30:44.302028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.958 [2024-11-27 06:30:44.310634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.958 [2024-11-27 06:30:44.310658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.959 [2024-11-27 06:30:44.310664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.959 [2024-11-27 06:30:44.321310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.959 [2024-11-27 06:30:44.321332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.959 [2024-11-27 06:30:44.321338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.959 [2024-11-27 06:30:44.331182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.959 [2024-11-27 06:30:44.331269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.959 [2024-11-27 06:30:44.331277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.959 [2024-11-27 06:30:44.341745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.959 [2024-11-27 06:30:44.341769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.959 [2024-11-27 06:30:44.341775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.959 [2024-11-27 06:30:44.351776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.959 [2024-11-27 06:30:44.351800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.959 [2024-11-27 06:30:44.351805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.959 [2024-11-27 06:30:44.361791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.959 [2024-11-27 06:30:44.361815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.959 [2024-11-27 06:30:44.361821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.959 [2024-11-27 06:30:44.371855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0) 00:28:24.959 [2024-11-27 06:30:44.371879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12772 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.959 [2024-11-27 06:30:44.371885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[log condensed: the same message pair -- nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc365f0), followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats for dozens of len:1 READs on qid:1 with assorted cid/lba values from 06:30:44.380 through 06:30:44.851; the repeated entries are elided here.]
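Each elided triplet is one injected failure: the host's TCP receive path recomputes the CRC-32C data digest for a completed READ (nvme_tcp.c:1365, nvme_tcp_accel_seq_recv_compute_crc32_done), sees a mismatch, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The run below still reports io_failed: 0, consistent with the --bdev-retry-count -1 setting traced in the next run's setup: the bdev layer retries the failed I/O while the error counter accumulates. As a rough cross-check against a captured copy of this console output (bperf.log is a hypothetical file name), the completions can be counted directly:

    # count the transient-transport-error completions recorded in a captured log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log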
00:28:25.484 25376.50 IOPS, 99.13 MiB/s
00:28:25.484 Latency(us)
00:28:25.484 [2024-11-27T06:30:45.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:25.484 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:25.484 nvme0n1 : 2.00 25401.00 99.22 0.00 0.00 5034.12 2835.69 12855.14
00:28:25.484 [2024-11-27T06:30:45.014Z] ===================================================================================================================
00:28:25.484 [2024-11-27T06:30:45.014Z] Total : 25401.00 99.22 0.00 0.00 5034.12 2835.69 12855.14
00:28:25.484 {
00:28:25.484   "results": [
00:28:25.484     {
00:28:25.484       "job": "nvme0n1",
00:28:25.484       "core_mask": "0x2",
00:28:25.484       "workload": "randread",
00:28:25.484       "status": "finished",
00:28:25.484       "queue_depth": 128,
00:28:25.484       "io_size": 4096,
00:28:25.484       "runtime": 2.00311,
00:28:25.484       "iops": 25401.001442756515,
00:28:25.484       "mibps": 99.22266188576764,
00:28:25.484       "io_failed": 0,
00:28:25.484       "io_timeout": 0,
00:28:25.484       "avg_latency_us": 5034.123100507519,
00:28:25.484       "min_latency_us": 2835.6923076923076,
00:28:25.484       "max_latency_us": 12855.138461538461
00:28:25.484     }
00:28:25.484   ],
00:28:25.484   "core_count": 1
00:28:25.484 }
00:28:25.484 06:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:25.484 06:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:25.484 06:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:25.484 | .driver_specific
00:28:25.484 | .nvme_error
00:28:25.484 | .status_code
00:28:25.484 | .command_transient_transport_error'
00:28:25.484 06:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 ))
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 113488
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 113488 ']'
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 113488
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113488
00:28:25.746 killing process with pid 113488
00:28:25.746 Received shutdown signal, test time was about 2.000000 seconds
00:28:25.746
00:28:25.746 Latency(us)
00:28:25.746 [2024-11-27T06:30:45.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:25.746 [2024-11-27T06:30:45.276Z] ===================================================================================================================
00:28:25.746 [2024-11-27T06:30:45.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
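The run's pass condition is the (( 199 > 0 )) check in the trace above: get_transient_errcount pulls the controller's NVMe error counters (populated because bdev_nvme_set_options was given --nvme-error-stat, as traced in the next run's setup below) out of bdev_get_iostat and asserts that at least one transient transport error was observed. The traced jq filter, condensed to a single line against the same RPC socket:

    # print the transient-transport-error count for nvme0n1 (199 in this run)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'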
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113488'
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 113488
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 113488
00:28:25.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=113577
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 113577 /var/tmp/bperf.sock
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 113577 ']'
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:25.746 06:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.746 [2024-11-27 06:30:45.244673] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization...
00:28:25.746 [2024-11-27 06:30:45.244888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113577 ]
00:28:25.746 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:25.746 Zero copy mechanism will not be used.
00:28:26.007 [2024-11-27 06:30:45.358896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
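run_bperf_err then repeats the experiment with a different I/O shape: 128 KiB random reads (-o 131072) at queue depth 16 instead of 4 KiB at depth 128. bdevperf is started with no bdev configuration (-z), pinned by core mask -m 2 (matching the "Core Mask 0x2" in the results above), and idles on its private RPC socket until a controller is attached and perform_tests is issued. A minimal sketch of that launch-and-wait step, assuming the repo layout shown in the trace (the polling loop is a simplified stand-in for the harness's waitforlisten helper):

    # start bdevperf with no preconfigured bdevs; -z makes it wait for an RPC to begin
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # wait until the application has created its RPC socket
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

The "I/O size of 131072 is greater than zero copy threshold (65536)" notice is expected at this block size; as the next line of the log says, the zero copy mechanism is simply not used.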
00:28:26.007 [2024-11-27 06:30:45.384875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:26.949 [2024-11-27 06:30:45.402225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:26.949 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:27.211 nvme0n1
00:28:27.211 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:27.211 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.211 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:27.211 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.211 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:27.211 06:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:27.211 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:27.211 Zero copy mechanism will not be used.
00:28:27.211 Running I/O for 2 seconds...
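This setup sequence is what provokes the digest failures: any leftover injection is cleared (accel_error_inject_error -o crc32c -t disable), per-controller NVMe error counting and unlimited bdev retries are enabled, the controller is attached with the TCP data digest turned on (--ddgst), and then crc32c results are corrupted through the accel error-injection module before perform_tests starts the run. A minimal sketch of the same RPC sequence, assuming the sockets shown in the trace (note that in the trace the injection goes through rpc_cmd, i.e. to a different application socket than bperf.sock, and that -i 32 is the injection interval argument as used by the test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # count NVMe error completions per controller; retry failed I/O indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the NVMe-oF TCP controller with data digest validation enabled
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results so data digests stop matching (interval as in the trace)
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

With crc32c corrupted, every affected READ completes with a data digest error and a COMMAND TRANSIENT TRANSPORT ERROR status, which is exactly what the following stretch of log shows.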
00:28:27.211 [2024-11-27 06:30:46.731983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:27.211 [2024-11-27 06:30:46.732021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.211 [2024-11-27 06:30:46.732029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[log condensed: the same data-digest-error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pair repeats for the 128 KiB randread run (len:32 READs on qid:1 with assorted cid/lba values, tqpair 0x1e44e20) from 06:30:46.735 through 06:30:46.929; the repeated entries are elided here, keeping the final record, which continues below.]
00:28:27.477 [2024-11-27 06:30:46.932207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:27.478 [2024-11-27 06:30:46.932232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.478 [2024-11-27 06:30:46.932237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0
dnr:0 00:28:27.478 [2024-11-27 06:30:46.935147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.935172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.935177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.938047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.938072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.938077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.940987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.941011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.941016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.943869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.943895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.943901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.946781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.946813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.949699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.949724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.949730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.952653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.952751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.952759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.955720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.955745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.955751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.958563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.958588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.958593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.961521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.961547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.961552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.964473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.964498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.964503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.967511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.967537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.967544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.970484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.970510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.970516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.973409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.973434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.973439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.976343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.976368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.976374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.979270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.979296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.979301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.982012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.982036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.982042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.985037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.985063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.985069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.987960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.987983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.987988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.990863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.990888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.990893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.993805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.993831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:27.478 [2024-11-27 06:30:46.993837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.996760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.996852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.996859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.478 [2024-11-27 06:30:46.999795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.478 [2024-11-27 06:30:46.999821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.478 [2024-11-27 06:30:46.999826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.002618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.002643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.002649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.005572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.005598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.005603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.008498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.008523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.008529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.011448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.011473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.011478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.014352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.014378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.014384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.017160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.017185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.017190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.020077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.020102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.020107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.023003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.023026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.023031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.025987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.026012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.026017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.028948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.742 [2024-11-27 06:30:47.028971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.742 [2024-11-27 06:30:47.028977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.742 [2024-11-27 06:30:47.031816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.031841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.031847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.034683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.034709] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.034714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.037587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.037613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.037619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.040534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.040559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.040564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.043490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.043516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.043522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.046401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.046428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.046434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.049340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.049365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.049371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.052348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.052374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.052381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.055257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 
06:30:47.055282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.055288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.058177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.058202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.058208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.060940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.060964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.060969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.063780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.063805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.063811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.066733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.066759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.066765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.069696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.069722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.069728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.072789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.072890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.072899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.075994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.076019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.076026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.078952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.078976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.078981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.081878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.081917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.081923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.084809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.084916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.084924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.087824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.087850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.087856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.090660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.090686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.090691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.093598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.093624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.093629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.096541] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.096635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.096643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.099585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.099612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.099618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.102540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.102567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.102572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.105524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.105550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.105556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.108445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.743 [2024-11-27 06:30:47.108470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.743 [2024-11-27 06:30:47.108476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.743 [2024-11-27 06:30:47.111347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.111373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.111378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.114267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.114292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.114297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:28:27.744 [2024-11-27 06:30:47.117216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.117241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.117247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.120084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.120108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.120114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.123007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.123032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.123037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.125899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.125932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.125938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.128838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.128944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.128951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.131834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.131859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.131865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.134855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.134881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.134886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.137752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.137778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.137784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.140437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.140527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.140534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.143423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.143450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.143456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.146343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.146368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.146374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.149227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.149252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.149258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.152174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.152199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.152204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.155033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.155057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.155063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.157970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.157994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.158000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.160991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.161016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.161022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.163897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.163928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.163934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.166828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.166855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.166860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.169911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.169936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.169941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.172731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.172820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.172827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.175805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.175831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:27.744 [2024-11-27 06:30:47.175836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.178737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.178764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.178769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.181458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.181483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.181489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.184311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.744 [2024-11-27 06:30:47.184336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.744 [2024-11-27 06:30:47.184342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.744 [2024-11-27 06:30:47.187247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.745 [2024-11-27 06:30:47.187272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.745 [2024-11-27 06:30:47.187277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.745 [2024-11-27 06:30:47.190013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.745 [2024-11-27 06:30:47.190038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.745 [2024-11-27 06:30:47.190043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.745 [2024-11-27 06:30:47.192931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.745 [2024-11-27 06:30:47.192954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.745 [2024-11-27 06:30:47.192959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.745 [2024-11-27 06:30:47.195864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:27.745 [2024-11-27 06:30:47.195890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.745 [2024-11-27 06:30:47.195895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:27.745 [2024-11-27 06:30:47.198822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:27.745 [2024-11-27 06:30:47.198848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.745 [2024-11-27 06:30:47.198853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:27.745 [2024-11-27 06:30:47.201847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:27.745 [2024-11-27 06:30:47.201872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.745 [2024-11-27 06:30:47.201878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (a data digest error on tqpair 0x1e44e20, the affected qid:1 READ, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for well over a hundred further READs across cid 0-14 and varying LBAs, from 06:30:47.204 through 06:30:47.566 (console time 00:28:27.745 to 00:28:28.275) ...]
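The burst above is a single failure mode exercised end to end: the NVMe/TCP transport guards each data PDU with a CRC32C data digest, the nvme_tcp_accel_seq_recv_compute_crc32_done callback named in every entry runs once the receive path has recomputed that digest, and a mismatch is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable transport fault rather than a media error. What follows is a minimal self-contained sketch of that digest comparison, assuming a plain software CRC32C (reflected polynomial 0x82F63B78); the function and variable names are illustrative and this is not SPDK's implementation:

/*
 * Sketch of the check behind the "data digest error" entries above.
 * Assumptions: software CRC32C, a single contiguous PDU payload, and
 * hypothetical names (crc32c, pdu_data) -- none of this is SPDK API.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return ~crc;
}

int main(void)
{
    uint8_t pdu_data[512];

    memset(pdu_data, 0xA5, sizeof(pdu_data));

    /* Digest the sender appends to the C2H data PDU. */
    uint32_t sent_digest = crc32c(0, pdu_data, sizeof(pdu_data));

    /* One flipped payload bit in flight is enough to break it. */
    pdu_data[100] ^= 0x01;

    /* The receiver recomputes the digest over what actually arrived. */
    uint32_t recv_digest = crc32c(0, pdu_data, sizeof(pdu_data));

    if (recv_digest != sent_digest)
        fprintf(stderr, "data digest error: got 0x%08x, expected 0x%08x\n",
                (unsigned)recv_digest, (unsigned)sent_digest);
    return 0;
}

Because a digest mismatch only proves the payload was corrupted in transit, not that the LBA is bad on media, completing the command with the transient transport status leaves the host free to retry; that is presumably why the injected corruption here produces a long run of these NOTICE completions rather than hard I/O failures.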
0x0 00:28:28.275 [2024-11-27 06:30:47.549310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.552256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.552281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.552287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.555068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.555093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.555099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.558121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.558146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.558153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.561079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.561105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.561110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.563998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.564022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.564027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.566912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.566935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.566940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.569836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.569940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.569948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.572804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.572830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.572835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.575767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.575793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.575798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.578706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.578732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.578737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.581649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.581739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.581746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.584778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.584803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.584808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.587759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.587784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.587790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.590688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.590715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.590720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.593685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.593777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.593785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.596651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.596676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.596682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.599646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.599672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.599677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.602608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.602634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.602639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.605555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.605581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.275 [2024-11-27 06:30:47.605586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.275 [2024-11-27 06:30:47.608513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.275 [2024-11-27 06:30:47.608538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.608543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.611442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 
00:28:28.276 [2024-11-27 06:30:47.611468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.611474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.614303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.614329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.614334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.617269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.617295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.617300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.620215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.620241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.620246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.623168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.623192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.623198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.626107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.626134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.626140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.629046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.629072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.629077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.631950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.631974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.631979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.634808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.634834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.634840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.637714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.637740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.637746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.640657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.640759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.640767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.643699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.643725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.643731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.646641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.646667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.646672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.649591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.649618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.649624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.652392] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.652418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.652423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.655342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.655367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.655373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.658298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.658325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.658331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.661351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.661378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.661383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.664180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.664206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.664211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.667228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.667254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.667260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.670122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.670148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.670154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:28.276 [2024-11-27 06:30:47.673156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.673182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.276 [2024-11-27 06:30:47.673188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.276 [2024-11-27 06:30:47.676056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.276 [2024-11-27 06:30:47.676080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.676086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.679007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.679032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.679038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.682011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.682037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.682043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.685089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.685116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.685122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.688165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.688191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.688198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.691172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.691198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.691204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.694216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.694243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.694248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.697222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.697248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.697254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.700126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.700152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.700158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.703117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.703144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.703150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.706164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.706191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.706197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.709213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.709244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.709250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.712220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.712246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.712251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.715361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.715388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.715394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.718549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.718575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.718583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.721669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.721772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.721780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.724828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.724854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.724860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.727768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.727800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.727807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.277 10844.00 IOPS, 1355.50 MiB/s [2024-11-27T06:30:47.807Z] [2024-11-27 06:30:47.731880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.731918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.731924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.735037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.735064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.735070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.738080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.738107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.738112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.741096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.277 [2024-11-27 06:30:47.741122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.277 [2024-11-27 06:30:47.741128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.277 [2024-11-27 06:30:47.744093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.744117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.744123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.747176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.747203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.747209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.750390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.750417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.750424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.753478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.753579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.753586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.756610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.756637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.756642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.759655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.759682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.759687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.762730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.762758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.762763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.764941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.764964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.764970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.767550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.767576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.767582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.770174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.770200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.770206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.772262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.772287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.772292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.774440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 
00:28:28.278 [2024-11-27 06:30:47.774466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.774471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.776604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.776630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.776635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.779308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.779334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.779340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.781528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.781629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.781637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.784261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.784288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.784294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.786568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.786596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.786602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.788752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.788776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.788782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.791299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.791324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.791330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.793992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.794016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.794022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.795877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.278 [2024-11-27 06:30:47.795917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-27 06:30:47.795923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-27 06:30:47.798773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.279 [2024-11-27 06:30:47.798801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-27 06:30:47.798807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.279 [2024-11-27 06:30:47.800836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.279 [2024-11-27 06:30:47.800861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-27 06:30:47.800867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.803326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.803351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.803357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.806033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.806058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.806064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.807924] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.807946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.807952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.810841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.810869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.810875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.812959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.812983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.812989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.815168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.815193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.815199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.817543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.817644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.817652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.819783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.819804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.819810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.821756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.821854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.821862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:28.540 [2024-11-27 06:30:47.824475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.824502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.824508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.827487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.827514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.827520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.829621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.829715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.829724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.832238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.832265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.832271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.834787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.834814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.834820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.837252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.837278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.837284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.540 [2024-11-27 06:30:47.839497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.540 [2024-11-27 06:30:47.839522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-27 06:30:47.839528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:28.540 [2024-11-27 06:30:47.841377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:28.540 [2024-11-27 06:30:47.841404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.540 [2024-11-27 06:30:47.841410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... repeated records of the same pattern elided: from 06:30:47.844 through 06:30:48.242, every queued READ on qid:1 (cid 0/2/4/6/8/9/11/13/14, varying lba, len:32) hit a data digest error on tqpair=(0x1e44e20) in nvme_tcp_accel_seq_recv_compute_crc32_done and was completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:28:28.808 [2024-11-27 06:30:48.245469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:28.808 [2024-11-27 06:30:48.245493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.808 [2024-11-27 06:30:48.245499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.808 [2024-11-27 06:30:48.248384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.808 [2024-11-27 06:30:48.248410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.808 [2024-11-27 06:30:48.248416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.808 [2024-11-27 06:30:48.251363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.808 [2024-11-27 06:30:48.251391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.808 [2024-11-27 06:30:48.251399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.808 [2024-11-27 06:30:48.254410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.808 [2024-11-27 06:30:48.254437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.808 [2024-11-27 06:30:48.254443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.808 [2024-11-27 06:30:48.257513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.808 [2024-11-27 06:30:48.257612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.808 [2024-11-27 06:30:48.257619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.808 [2024-11-27 06:30:48.260499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.808 [2024-11-27 06:30:48.260526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.808 [2024-11-27 06:30:48.260532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.263512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.263539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.263545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.266492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.266520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.266525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.269476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.269573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.269581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.272491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.272517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.272522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.275505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.275531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.275536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.278314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.278339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.278344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.281320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.281346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.281352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.284318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.284344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.284349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.287307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.287334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.287339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.290348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.290374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.290380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.293364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.293391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.293396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.296382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.296408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.296413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.299381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.299407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.299413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.302373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.302400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.302406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.305205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.305231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.305237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.308227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.308253] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.308258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.311400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.311426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.311432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.314403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.314429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.314434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.317373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.317399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.317405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.320434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.320460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.320465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.323417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.323443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.809 [2024-11-27 06:30:48.323449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.809 [2024-11-27 06:30:48.326436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.809 [2024-11-27 06:30:48.326462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-27 06:30:48.326468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.810 [2024-11-27 06:30:48.329519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e44e20) 00:28:28.810 [2024-11-27 06:30:48.329618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-27 06:30:48.329626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.810 [2024-11-27 06:30:48.332646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:28.810 [2024-11-27 06:30:48.332673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.810 [2024-11-27 06:30:48.332678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.335624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.335649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.335655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.338603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.338629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.338635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.341687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.341784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.341792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.344799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.344825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.344830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.347754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.347780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.347785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.350890] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.350924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.350930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.353856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.353964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.353971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.356785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.356806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.356811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.359749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.359775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.359781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.362755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.362782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.362789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.365745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.365844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.365852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.070 [2024-11-27 06:30:48.368803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.070 [2024-11-27 06:30:48.368828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.070 [2024-11-27 06:30:48.368834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
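The *ERROR* lines above come from SPDK's receive path: with data digest enabled, every NVMe/TCP data PDU carries a DDGST field, a CRC32C computed over the PDU's DATA field, and nvme_tcp_accel_seq_recv_compute_crc32_done logs this error when the recomputed CRC32C does not match the digest on the wire. Below is a minimal sketch of that check in plain Python, not SPDK's accelerated implementation; the helper names pdu_data and ddgst_on_wire are illustrative assumptions, only the CRC32C algorithm itself is fixed by the spec.

    # Reflected CRC32C (Castagnoli polynomial 0x1EDC6F41, reflected 0x82F63B78),
    # the digest algorithm NVMe/TCP uses for header and data digests.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def verify_ddgst(pdu_data: bytes, ddgst_on_wire: int) -> bool:
        # Mirrors the pass/fail decision behind "data digest error on tqpair=(...)":
        # recompute CRC32C over the received DATA field and compare to the wire digest.
        return crc32c(pdu_data) == ddgst_on_wire

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value

When the comparison fails, the payload cannot be trusted even though it arrived, so rather than surfacing possibly corrupt data the driver fails the command with the transport-level status shown in the paired completion lines.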
00:28:29.070 [2024-11-27 06:30:48.371884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:29.070 [2024-11-27 06:30:48.371924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.070 [2024-11-27 06:30:48.371930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the triplets continue on tqpair 0x1e44e20 for qid:1, now including cid:5, at varying LBAs through 06:30:48.595 ...]
00:28:29.332 [2024-11-27 06:30:48.598656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20)
00:28:29.333 [2024-11-27 06:30:48.598683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.333 [2024-11-27 06:30:48.598689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.601676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.601778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.601785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.604760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.604785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.604791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.607791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.607820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.607827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.610768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.610794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.610800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.613816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.613930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.613938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.616931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.616954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.616960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.619958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.619982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.619987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.623097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.623124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.623130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.626142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.626168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.626173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.629149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.629175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.629181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.632141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.632167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.632174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.635173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.635201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.635207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.638195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.638222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.638227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.641188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.641218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.333 [2024-11-27 06:30:48.641225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.644278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.644305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.644310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.647264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.647291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.647296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.650300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.650327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.650333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.653243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.653270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.653276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.656366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.656394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.656401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.659348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.659375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.659380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.662421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.662449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.662455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.665437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.665466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.665472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.668459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.668484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.668490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.671417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.671445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.671451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.674383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.674410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.674416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.333 [2024-11-27 06:30:48.677378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.333 [2024-11-27 06:30:48.677405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.333 [2024-11-27 06:30:48.677410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.680392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.680419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.680425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.683640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.683667] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.683672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.686595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.686621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.686627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.689772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.689800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.689810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.692873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.692986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.692993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.695952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.695975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.695981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.698924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.698948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.698954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.701885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.701925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.701932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.704821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.704926] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.704934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.707602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.707627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.707633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.710550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.710576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.710581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.713436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.713461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.713466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.716313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.716339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.716344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.719044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.719069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.719074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.721943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.721967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.721972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.724783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 
00:28:29.334 [2024-11-27 06:30:48.724807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.724813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.334 [2024-11-27 06:30:48.727766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.727860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.727868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.334 10816.00 IOPS, 1352.00 MiB/s [2024-11-27T06:30:48.864Z] [2024-11-27 06:30:48.731842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e44e20) 00:28:29.334 [2024-11-27 06:30:48.731942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-11-27 06:30:48.731949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:29.334 Latency(us)
00:28:29.334 [2024-11-27T06:30:48.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:29.334 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:29.334 nvme0n1 : 2.00 10811.21 1351.40 0.00 0.00 1477.32 460.01 4259.84
00:28:29.334 [2024-11-27T06:30:48.864Z] ===================================================================================================================
00:28:29.334 [2024-11-27T06:30:48.864Z] Total : 10811.21 1351.40 0.00 0.00 1477.32 460.01 4259.84
00:28:29.334 {
00:28:29.334 "results": [
00:28:29.334 {
00:28:29.334 "job": "nvme0n1",
00:28:29.334 "core_mask": "0x2",
00:28:29.334 "workload": "randread",
00:28:29.334 "status": "finished",
00:28:29.334 "queue_depth": 16,
00:28:29.334 "io_size": 131072,
00:28:29.334 "runtime": 2.002366,
00:28:29.334 "iops": 10811.210338169945,
00:28:29.334 "mibps": 1351.4012922712432,
00:28:29.334 "io_failed": 0,
00:28:29.334 "io_timeout": 0,
00:28:29.334 "avg_latency_us": 1477.324636989027,
00:28:29.334 "min_latency_us": 460.0123076923077,
00:28:29.334 "max_latency_us": 4259.84
00:28:29.334 }
00:28:29.334 ],
00:28:29.334 "core_count": 1
00:28:29.334 }
06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error' 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:29.592 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 699 > 0 )) 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error
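For reference, the (( 699 > 0 )) check above is the pass criterion: the run must have recorded at least one COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of what get_transient_errcount boils down to, reconstructed from the traced commands (an approximation of host/digest.sh, not the verbatim source; it assumes the per-status-code counters were enabled earlier via bdev_nvme_set_options --nvme-error-stat):

    # Sketch: read the per-bdev NVMe error counters over the bperf RPC socket
    # and pull out the transient transport error count that the injected
    # digest corruption is expected to produce.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # Pass criterion, as traced above (this run counted 699 such errors):
    (( $(get_transient_errcount nvme0n1) > 0 ))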
-- host/digest.sh@73 -- # killprocess 113577 00:28:29.592 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 113577 ']' 00:28:29.592 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 113577 00:28:29.592 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:29.593 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.593 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113577 00:28:29.593 killing process with pid 113577 00:28:29.593 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.593 00:28:29.593 Latency(us) 00:28:29.593 [2024-11-27T06:30:49.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.593 [2024-11-27T06:30:49.123Z] =================================================================================================================== 00:28:29.593 [2024-11-27T06:30:49.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.593 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.593 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.593 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113577' 00:28:29.593 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 113577 00:28:29.593 06:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 113577 00:28:29.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
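The teardown traced above follows the stock killprocess helper from autotest_common.sh. A rough sketch of that flow, reconstructed from the trace (the sudo branch is only hinted at by the '[' reactor_1 = sudo ']' comparison, so its handling here is an assumption):

    # Sketch: stop a background test app by pid, as in the teardown above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1        # a pid must be supplied
        kill -0 "$pid" || return 1       # the process must still be alive
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # Assumption: sudo-wrapped processes get different handling in the
        # real helper; this sketch only covers the plain case seen here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap it so its exit code is seen
    }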
00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=113663 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 113663 /var/tmp/bperf.sock 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 113663 ']' 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.593 06:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.865 [2024-11-27 06:30:49.127630] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:28:29.865 [2024-11-27 06:30:49.127800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113663 ] 00:28:29.865 [2024-11-27 06:30:49.241414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
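Putting the run_bperf_err setup together: each error pass starts a fresh bdevperf instance in wait-for-RPC mode (-z) on its own socket and then blocks until that socket answers, which is where the startup banner below comes from. A condensed sketch under those assumptions (the polling loop stands in for waitforlisten, whose real body is not shown in the trace; rpc_get_methods is used only as a cheap liveness probe):

    # Sketch: launch bdevperf for the randwrite error pass, as traced above.
    rw=randwrite bs=4096 qd=128
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
    bperfpid=$!

    # Stand-in for waitforlisten: poll until the app services RPCs.
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done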
00:28:29.865 [2024-11-27 06:30:49.268012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.865 [2024-11-27 06:30:49.286239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.806 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.064 nvme0n1 00:28:31.064 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:31.064 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.064 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.064 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.064 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:31.064 06:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.323 Running I/O for 2 seconds... 
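The RPC sequence just traced is what provokes the digest errors that follow: error counting is switched on with retries disabled, the controller is attached with the TCP data digest enabled, and the accel crc32c operation is told to corrupt every 256th result before the I/O run starts. A condensed sketch of those calls (rpc_bperf and rpc_tgt are shorthand introduced here; rpc_cmd carries no -s flag in the trace, so the assumption is that it talks to the default RPC socket of the target app):

    rpc_bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc_tgt() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # assumed default socket

    # Count NVMe errors per status code and never retry, so every digest
    # failure surfaces exactly once as COMMAND TRANSIENT TRANSPORT ERROR.
    rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Make sure no stale injection is active before attaching (as traced).
    rpc_tgt accel_error_inject_error -o crc32c -t disable

    # Attach with data digest on; TCP data PDUs now carry a CRC32C to verify.
    rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th crc32c result, then drive the I/O from bperf.
    rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests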
00:28:31.323 [2024-11-27 06:30:50.680498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef3a28 00:28:31.323 [2024-11-27 06:30:50.681382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.323 [2024-11-27 06:30:50.681410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:31.323 [2024-11-27 06:30:50.691112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee2c28 00:28:31.323 [2024-11-27 06:30:50.692537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.323 [2024-11-27 06:30:50.692561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:31.323 [2024-11-27 06:30:50.697372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee3498 00:28:31.323 [2024-11-27 06:30:50.698010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.323 [2024-11-27 06:30:50.698030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:31.323 [2024-11-27 06:30:50.707844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef4298 00:28:31.323 [2024-11-27 06:30:50.708982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.323 [2024-11-27 06:30:50.709001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:31.323 [2024-11-27 06:30:50.715963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef9f68 00:28:31.323 [2024-11-27 06:30:50.716796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.323 [2024-11-27 06:30:50.716819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.724392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef1868 00:28:31.324 [2024-11-27 06:30:50.725306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.725327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.734653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0a68 00:28:31.324 [2024-11-27 06:30:50.736121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.736140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 
cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.740841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee5658 00:28:31.324 [2024-11-27 06:30:50.741554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.741575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.751417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6458 00:28:31.324 [2024-11-27 06:30:50.752609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.752631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.759606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef7da8 00:28:31.324 [2024-11-27 06:30:50.760541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.760564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.768025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eef6a8 00:28:31.324 [2024-11-27 06:30:50.768992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.769012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.776583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee7818 00:28:31.324 [2024-11-27 06:30:50.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.777221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.784755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee9e10 00:28:31.324 [2024-11-27 06:30:50.785302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.785324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.794753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee84c0 00:28:31.324 [2024-11-27 06:30:50.795850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.796708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.802968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef7538 00:28:31.324 [2024-11-27 06:30:50.803878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.803982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.811317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eecc78 00:28:31.324 [2024-11-27 06:30:50.812294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.812316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.819993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee7818 00:28:31.324 [2024-11-27 06:30:50.820978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.821000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.828131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efb048 00:28:31.324 [2024-11-27 06:30:50.829008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.829028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.836242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee23b8 00:28:31.324 [2024-11-27 06:30:50.836994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.837015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:31.324 [2024-11-27 06:30:50.844477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efef90 00:28:31.324 [2024-11-27 06:30:50.845151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.324 [2024-11-27 06:30:50.845172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.852742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef0ff8 00:28:31.586 [2024-11-27 06:30:50.853277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.853298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.863175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee2c28 00:28:31.586 [2024-11-27 06:30:50.863816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.863915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.871350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef0788 00:28:31.586 [2024-11-27 06:30:50.871997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.872083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.879666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef7538 00:28:31.586 [2024-11-27 06:30:50.880165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.880255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.889789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee1710 00:28:31.586 [2024-11-27 06:30:50.890893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.890994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.898250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efd640 00:28:31.586 [2024-11-27 06:30:50.899250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.899335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.906645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eee190 00:28:31.586 [2024-11-27 06:30:50.907514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.907600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.915108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee6738 00:28:31.586 [2024-11-27 06:30:50.915861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.915960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.923517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee7818 00:28:31.586 [2024-11-27 06:30:50.924138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.924221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.935122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef7100 00:28:31.586 [2024-11-27 06:30:50.936862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.586 [2024-11-27 06:30:50.936973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:31.586 [2024-11-27 06:30:50.942045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee1b48 00:28:31.587 [2024-11-27 06:30:50.942835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:50.942936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:50.953089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee38d0 00:28:31.587 [2024-11-27 06:30:50.954359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:50.954450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:50.961988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef5be8 00:28:31.587 [2024-11-27 06:30:50.963213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:50.963300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:50.968407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efeb58 00:28:31.587 [2024-11-27 06:30:50.969062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:50.969150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:50.978897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6cc8 00:28:31.587 [2024-11-27 06:30:50.980065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 
06:30:50.980150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:50.987824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeea00 00:28:31.587 [2024-11-27 06:30:50.988622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:50.988710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:50.996324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6890 00:28:31.587 [2024-11-27 06:30:50.997053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:50.997141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.006610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee84c0 00:28:31.587 [2024-11-27 06:30:51.008142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.008243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.013285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef1ca0 00:28:31.587 [2024-11-27 06:30:51.014085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.014176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.023805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eedd58 00:28:31.587 [2024-11-27 06:30:51.025097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.025192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.032558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6020 00:28:31.587 [2024-11-27 06:30:51.033849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.033943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.039151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef0ff8 00:28:31.587 [2024-11-27 06:30:51.039853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:31.587 [2024-11-27 06:30:51.039946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.049937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee3060 00:28:31.587 [2024-11-27 06:30:51.051173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.051258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.058339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8a50 00:28:31.587 [2024-11-27 06:30:51.059245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.059336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.066983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee5220 00:28:31.587 [2024-11-27 06:30:51.067950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.067971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.077633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eff3c8 00:28:31.587 [2024-11-27 06:30:51.079121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.079141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.083977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef81e0 00:28:31.587 [2024-11-27 06:30:51.084740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.084763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.094575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee3498 00:28:31.587 [2024-11-27 06:30:51.095651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.095674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.102949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efa7d8 00:28:31.587 [2024-11-27 06:30:51.103955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17158 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.103978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:31.587 [2024-11-27 06:30:51.111753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee7c50 00:28:31.587 [2024-11-27 06:30:51.112823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.587 [2024-11-27 06:30:51.112845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.119962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee1b48 00:28:31.850 [2024-11-27 06:30:51.120754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.120777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.128572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eedd58 00:28:31.850 [2024-11-27 06:30:51.129423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.129443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.139007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef0788 00:28:31.850 [2024-11-27 06:30:51.140312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.140333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.145104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0ea0 00:28:31.850 [2024-11-27 06:30:51.145703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.145724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.155321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee6738 00:28:31.850 [2024-11-27 06:30:51.156293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.156314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.163439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8a50 00:28:31.850 [2024-11-27 06:30:51.164285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:19640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.164306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.171574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eecc78 00:28:31.850 [2024-11-27 06:30:51.172320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.172341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.179688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eea248 00:28:31.850 [2024-11-27 06:30:51.180308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.180329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.187829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee88f8 00:28:31.850 [2024-11-27 06:30:51.188352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.188373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.198605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee9e10 00:28:31.850 [2024-11-27 06:30:51.199914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.199932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.206417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef57b0 00:28:31.850 [2024-11-27 06:30:51.207354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.207377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.214761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edf550 00:28:31.850 [2024-11-27 06:30:51.215765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.215789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.223179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eebb98 00:28:31.850 [2024-11-27 06:30:51.224080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:16915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.224105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.231925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eecc78 00:28:31.850 [2024-11-27 06:30:51.232550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.232576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.240097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ede8a8 00:28:31.850 [2024-11-27 06:30:51.240640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.240664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.248210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eea248 00:28:31.850 [2024-11-27 06:30:51.248606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.248631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.257967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef4298 00:28:31.850 [2024-11-27 06:30:51.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.258979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.266105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee2c28 00:28:31.850 [2024-11-27 06:30:51.266995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.267016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.274199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eebb98 00:28:31.850 [2024-11-27 06:30:51.274965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.274985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.282751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee5ec8 00:28:31.850 [2024-11-27 06:30:51.283287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.850 [2024-11-27 06:30:51.283309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:31.850 [2024-11-27 06:30:51.291273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef3a28 00:28:31.851 [2024-11-27 06:30:51.292046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.292066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.299389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeea00 00:28:31.851 [2024-11-27 06:30:51.300044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.300065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.307896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eef270 00:28:31.851 [2024-11-27 06:30:51.308664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.308684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.318167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4578 00:28:31.851 [2024-11-27 06:30:51.319419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.319439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.324233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee88f8 00:28:31.851 [2024-11-27 06:30:51.324792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.324812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.334478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee3d08 00:28:31.851 [2024-11-27 06:30:51.335516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.335537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.342440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef57b0 00:28:31.851 [2024-11-27 
06:30:51.343219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.343241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.350798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4140 00:28:31.851 [2024-11-27 06:30:51.351627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.351648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.361054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef9b30 00:28:31.851 [2024-11-27 06:30:51.362435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.362531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.367288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eee190 00:28:31.851 [2024-11-27 06:30:51.367971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.851 [2024-11-27 06:30:51.368043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:31.851 [2024-11-27 06:30:51.377686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6cc8 00:28:32.113 [2024-11-27 06:30:51.378854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.378954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.385913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0630 00:28:32.113 [2024-11-27 06:30:51.386725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.386815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.394349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee3498 00:28:32.113 [2024-11-27 06:30:51.395304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.395388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.404778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0ea0 
00:28:32.113 [2024-11-27 06:30:51.406220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.406303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.411040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eed0b0 00:28:32.113 [2024-11-27 06:30:51.411775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.411858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.421453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edfdc0 00:28:32.113 [2024-11-27 06:30:51.422682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.422765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.430204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edece0 00:28:32.113 [2024-11-27 06:30:51.431429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.431512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.436608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeff18 00:28:32.113 [2024-11-27 06:30:51.437258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.437340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.447061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efa3a0 00:28:32.113 [2024-11-27 06:30:51.448191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.448274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.455194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eff3c8 00:28:32.113 [2024-11-27 06:30:51.456011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.456096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.464360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with 
pdu=0x200016ee0630 00:28:32.113 [2024-11-27 06:30:51.465141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.465225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.472609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eee190 00:28:32.113 [2024-11-27 06:30:51.473310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.473395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.480822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efeb58 00:28:32.113 [2024-11-27 06:30:51.481369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.481456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.490644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef7538 00:28:32.113 [2024-11-27 06:30:51.491780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.491865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.498860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efdeb0 00:28:32.113 [2024-11-27 06:30:51.499900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.499991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.507067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeea00 00:28:32.113 [2024-11-27 06:30:51.507973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.508057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.515275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef2d80 00:28:32.113 [2024-11-27 06:30:51.516080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.516163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.523460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dd1740) with pdu=0x200016efac10 00:28:32.113 [2024-11-27 06:30:51.524139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.524223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.531646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee9168 00:28:32.113 [2024-11-27 06:30:51.532224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.532308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.542473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eef6a8 00:28:32.113 [2024-11-27 06:30:51.543760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.543835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.550664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee88f8 00:28:32.113 [2024-11-27 06:30:51.551766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.551788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.557545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee23b8 00:28:32.113 [2024-11-27 06:30:51.558240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.558257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.565989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efe720 00:28:32.113 [2024-11-27 06:30:51.566608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.566683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.574660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efc560 00:28:32.113 [2024-11-27 06:30:51.575281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.575302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.582809] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eedd58 00:28:32.113 [2024-11-27 06:30:51.583334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.583354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:32.113 [2024-11-27 06:30:51.592917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef7970 00:28:32.113 [2024-11-27 06:30:51.593570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.113 [2024-11-27 06:30:51.593649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:32.114 [2024-11-27 06:30:51.601046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4140 00:28:32.114 [2024-11-27 06:30:51.601555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.114 [2024-11-27 06:30:51.601578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:32.114 [2024-11-27 06:30:51.609170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee8d30 00:28:32.114 [2024-11-27 06:30:51.609597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.114 [2024-11-27 06:30:51.609620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:32.114 [2024-11-27 06:30:51.618984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eee190 00:28:32.114 [2024-11-27 06:30:51.620017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.114 [2024-11-27 06:30:51.620096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:32.114 [2024-11-27 06:30:51.627331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee38d0 00:28:32.114 [2024-11-27 06:30:51.628207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.114 [2024-11-27 06:30:51.628228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:32.114 [2024-11-27 06:30:51.635461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef92c0 00:28:32.114 [2024-11-27 06:30:51.636237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.114 [2024-11-27 06:30:51.636258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:32.375 [2024-11-27 06:30:51.643583] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6458 00:28:32.375 [2024-11-27 06:30:51.644234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.375 [2024-11-27 06:30:51.644254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:32.375 [2024-11-27 06:30:51.653811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef4298 00:28:32.375 [2024-11-27 06:30:51.655083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.375 [2024-11-27 06:30:51.655160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:32.375 [2024-11-27 06:30:51.659970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4de8 00:28:32.376 [2024-11-27 06:30:51.660579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.660597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:32.376 29100.00 IOPS, 113.67 MiB/s [2024-11-27T06:30:51.906Z] [2024-11-27 06:30:51.670278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeff18 00:28:32.376 [2024-11-27 06:30:51.671317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.671339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.678241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef2d80 00:28:32.376 [2024-11-27 06:30:51.679017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.679039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.686594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eed920 00:28:32.376 [2024-11-27 06:30:51.687429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.687449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.696838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef35f0 00:28:32.376 [2024-11-27 06:30:51.698154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.698174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 
cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.702902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef20d8 00:28:32.376 [2024-11-27 06:30:51.703514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.703535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.713147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef92c0 00:28:32.376 [2024-11-27 06:30:51.714248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.714269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.721104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efb048 00:28:32.376 [2024-11-27 06:30:51.721949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.721969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.729460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efd208 00:28:32.376 [2024-11-27 06:30:51.730349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.730368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.738130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee1710 00:28:32.376 [2024-11-27 06:30:51.739082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.739100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.746352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8e88 00:28:32.376 [2024-11-27 06:30:51.747134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.747155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.754451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef7970 00:28:32.376 [2024-11-27 06:30:51.755108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.755128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.762602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef4f40 00:28:32.376 [2024-11-27 06:30:51.763158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.763174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.772990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeea00 00:28:32.376 [2024-11-27 06:30:51.773762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.773782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.781442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef35f0 00:28:32.376 [2024-11-27 06:30:51.782010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.782031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.789730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eee5c8 00:28:32.376 [2024-11-27 06:30:51.790203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.790225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.799493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efc998 00:28:32.376 [2024-11-27 06:30:51.800531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.800613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.807601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efc560 00:28:32.376 [2024-11-27 06:30:51.808512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.808533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.815727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef46d0 00:28:32.376 [2024-11-27 06:30:51.816535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.816556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.823852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edf550 00:28:32.376 [2024-11-27 06:30:51.824542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.824562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.832008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eee190 00:28:32.376 [2024-11-27 06:30:51.832647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.832664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.842820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6458 00:28:32.376 [2024-11-27 06:30:51.844140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.844160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.850943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efc128 00:28:32.376 [2024-11-27 06:30:51.852114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.852134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.857810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8e88 00:28:32.376 [2024-11-27 06:30:51.858523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.858543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.865963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0ea0 00:28:32.376 [2024-11-27 06:30:51.866545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.866565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.876044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee6b70 00:28:32.376 [2024-11-27 06:30:51.876776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.876798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:32.376 [2024-11-27 06:30:51.884160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efc128 00:28:32.376 [2024-11-27 06:30:51.884765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.376 [2024-11-27 06:30:51.884789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:32.377 [2024-11-27 06:30:51.892633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edf118 00:28:32.377 [2024-11-27 06:30:51.893467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.377 [2024-11-27 06:30:51.893488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:32.377 [2024-11-27 06:30:51.900770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef9f68 00:28:32.377 [2024-11-27 06:30:51.901492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.377 [2024-11-27 06:30:51.901513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.908886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef0788 00:28:32.638 [2024-11-27 06:30:51.909488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.909509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.917410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee49b0 00:28:32.638 [2024-11-27 06:30:51.918188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.918206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.927735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef9f68 00:28:32.638 [2024-11-27 06:30:51.928943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.928963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.935697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efa7d8 00:28:32.638 [2024-11-27 06:30:51.936639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.936662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.944075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee5ec8 00:28:32.638 [2024-11-27 06:30:51.945137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.945157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.952830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee95a0 00:28:32.638 [2024-11-27 06:30:51.953812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.953833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.959679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee38d0 00:28:32.638 [2024-11-27 06:30:51.960256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.960277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.969855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee9e10 00:28:32.638 [2024-11-27 06:30:51.970500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.970522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.977997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee38d0 00:28:32.638 [2024-11-27 06:30:51.978589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.978608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.986350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee5658 00:28:32.638 [2024-11-27 06:30:51.987045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 06:30:51.987128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:51.994693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeb760 00:28:32.638 [2024-11-27 06:30:51.995459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.638 [2024-11-27 
06:30:51.995480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:32.638 [2024-11-27 06:30:52.003392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6cc8 00:28:32.639 [2024-11-27 06:30:52.004155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.004176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.012549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee3060 00:28:32.639 [2024-11-27 06:30:52.013427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.013448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.020684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0630 00:28:32.639 [2024-11-27 06:30:52.021457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.021477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.028817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee9168 00:28:32.639 [2024-11-27 06:30:52.029463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.029483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.036976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eed0b0 00:28:32.639 [2024-11-27 06:30:52.037579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.037596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.046735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee9168 00:28:32.639 [2024-11-27 06:30:52.047650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.047671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.055068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edf550 00:28:32.639 [2024-11-27 06:30:52.055830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:32.639 [2024-11-27 06:30:52.055850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.064048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8e88 00:28:32.639 [2024-11-27 06:30:52.064639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.064658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.072425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8618 00:28:32.639 [2024-11-27 06:30:52.073168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.073189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.080795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8e88 00:28:32.639 [2024-11-27 06:30:52.081461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.081481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.088928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee5ec8 00:28:32.639 [2024-11-27 06:30:52.089452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.089471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.099673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee6300 00:28:32.639 [2024-11-27 06:30:52.100933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.100955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.107846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eedd58 00:28:32.639 [2024-11-27 06:30:52.109027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.109049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.116126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef2948 00:28:32.639 [2024-11-27 06:30:52.117159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7186 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.117180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.124447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee2c28 00:28:32.639 [2024-11-27 06:30:52.125379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.125399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.134493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef0bc0 00:28:32.639 [2024-11-27 06:30:52.135865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.135887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.140568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eebfd0 00:28:32.639 [2024-11-27 06:30:52.141255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.141276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.150816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef1ca0 00:28:32.639 [2024-11-27 06:30:52.151856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.151877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.158956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee27f0 00:28:32.639 [2024-11-27 06:30:52.159863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.639 [2024-11-27 06:30:52.159885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.639 [2024-11-27 06:30:52.167107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efa3a0 00:28:32.900 [2024-11-27 06:30:52.167915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.900 [2024-11-27 06:30:52.167936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:32.900 [2024-11-27 06:30:52.177313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4de8 00:28:32.900 [2024-11-27 06:30:52.178705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.900 [2024-11-27 06:30:52.178726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:32.900 [2024-11-27 06:30:52.183387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef5378 00:28:32.901 [2024-11-27 06:30:52.184083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.184105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.193651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efdeb0 00:28:32.901 [2024-11-27 06:30:52.194832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.194854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.201240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee84c0 00:28:32.901 [2024-11-27 06:30:52.202551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.202575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.210762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0630 00:28:32.901 [2024-11-27 06:30:52.211833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.211856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.218898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee5658 00:28:32.901 [2024-11-27 06:30:52.219850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.219872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.227048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee9e10 00:28:32.901 [2024-11-27 06:30:52.227882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.227910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.236045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eec840 00:28:32.901 [2024-11-27 06:30:52.237138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:16644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.237164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.244210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0630 00:28:32.901 [2024-11-27 06:30:52.245123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.245149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.252819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eec840 00:28:32.901 [2024-11-27 06:30:52.253695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.253719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.263101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eed920 00:28:32.901 [2024-11-27 06:30:52.264448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.264473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.269191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef2510 00:28:32.901 [2024-11-27 06:30:52.269837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.269860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.279448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6020 00:28:32.901 [2024-11-27 06:30:52.280586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.280608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.288124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efc560 00:28:32.901 [2024-11-27 06:30:52.289268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.289290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.296268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edf118 00:28:32.901 [2024-11-27 06:30:52.297316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.297339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.304381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef2d80 00:28:32.901 [2024-11-27 06:30:52.305298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.305321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.314063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4140 00:28:32.901 [2024-11-27 06:30:52.315331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.315354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.320128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee6b70 00:28:32.901 [2024-11-27 06:30:52.320692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.320714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.330396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef1ca0 00:28:32.901 [2024-11-27 06:30:52.331456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.331478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.339081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef3a28 00:28:32.901 [2024-11-27 06:30:52.340138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.340160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.345950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efcdd0 00:28:32.901 [2024-11-27 06:30:52.346531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.356042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016edf550 00:28:32.901 [2024-11-27 
06:30:52.356762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.356785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.364176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee23b8 00:28:32.901 [2024-11-27 06:30:52.364787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.364810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.372317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0ea0 00:28:32.901 [2024-11-27 06:30:52.372803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.372826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.382078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef9f68 00:28:32.901 [2024-11-27 06:30:52.383146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.383169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.390229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efef90 00:28:32.901 [2024-11-27 06:30:52.391188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.391210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.398351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee8d30 00:28:32.901 [2024-11-27 06:30:52.399192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.399214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.406838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efc128 00:28:32.901 [2024-11-27 06:30:52.407792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.407814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.417097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef4298 
00:28:32.901 [2024-11-27 06:30:52.418535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.418557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:32.901 [2024-11-27 06:30:52.423167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eed4e8 00:28:32.901 [2024-11-27 06:30:52.423901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.901 [2024-11-27 06:30:52.423929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.433416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeea00 00:28:33.178 [2024-11-27 06:30:52.434640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.434663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.439487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee23b8 00:28:33.178 [2024-11-27 06:30:52.440014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.440036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.449744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeaab8 00:28:33.178 [2024-11-27 06:30:52.450757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.450778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.457715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eff3c8 00:28:33.178 [2024-11-27 06:30:52.458476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.458499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.466086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4578 00:28:33.178 [2024-11-27 06:30:52.466877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.466899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.476323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) 
with pdu=0x200016eea680 00:28:33.178 [2024-11-27 06:30:52.477618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.477641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.482403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eec840 00:28:33.178 [2024-11-27 06:30:52.482990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.483011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.492650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efb048 00:28:33.178 [2024-11-27 06:30:52.493726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.493747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.500624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee84c0 00:28:33.178 [2024-11-27 06:30:52.501454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.501479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.508998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef2948 00:28:33.178 [2024-11-27 06:30:52.509848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.509870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.519242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee7818 00:28:33.178 [2024-11-27 06:30:52.520584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.520606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.525318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6458 00:28:33.178 [2024-11-27 06:30:52.525962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.525985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:33.178 [2024-11-27 06:30:52.535563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dd1740) with pdu=0x200016ee3498 00:28:33.178 [2024-11-27 06:30:52.536691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.178 [2024-11-27 06:30:52.536713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.543531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee4de8 00:28:33.179 [2024-11-27 06:30:52.544413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.544438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.551916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee88f8 00:28:33.179 [2024-11-27 06:30:52.552830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.552852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.562165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef96f8 00:28:33.179 [2024-11-27 06:30:52.563566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.563588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.568235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef8618 00:28:33.179 [2024-11-27 06:30:52.568944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.568979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.578507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef4298 00:28:33.179 [2024-11-27 06:30:52.579694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.579715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.586482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eed4e8 00:28:33.179 [2024-11-27 06:30:52.587415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.587439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.594843] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eee190 00:28:33.179 [2024-11-27 06:30:52.595823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.595845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.603545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef1430 00:28:33.179 [2024-11-27 06:30:52.604521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.604543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.611703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee0ea0 00:28:33.179 [2024-11-27 06:30:52.612575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.612598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.619801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef3e60 00:28:33.179 [2024-11-27 06:30:52.620548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.620570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.627965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef6020 00:28:33.179 [2024-11-27 06:30:52.628604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.628626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.636087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ef1ca0 00:28:33.179 [2024-11-27 06:30:52.636600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.636622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.646543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016ee73e0 00:28:33.179 [2024-11-27 06:30:52.647531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.179 [2024-11-27 06:30:52.647553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:33.179 [2024-11-27 06:30:52.654667] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016efdeb0
00:28:33.179 [2024-11-27 06:30:52.655544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:33.179 [2024-11-27 06:30:52.655567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:33.179 [2024-11-27 06:30:52.662795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd1740) with pdu=0x200016eeb328
00:28:33.179 [2024-11-27 06:30:52.663555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:33.179 [2024-11-27 06:30:52.663576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:33.179 29464.50 IOPS, 115.10 MiB/s
00:28:33.179 Latency(us)
00:28:33.179 [2024-11-27T06:30:52.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.179 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:33.179 nvme0n1 : 2.00 29470.19 115.12 0.00 0.00 4337.81 1739.22 12905.55
00:28:33.179 [2024-11-27T06:30:52.709Z] ===================================================================================================================
00:28:33.179 [2024-11-27T06:30:52.709Z] Total : 29470.19 115.12 0.00 0.00 4337.81 1739.22 12905.55
00:28:33.179 {
00:28:33.179   "results": [
00:28:33.179     {
00:28:33.179       "job": "nvme0n1",
00:28:33.179       "core_mask": "0x2",
00:28:33.179       "workload": "randwrite",
00:28:33.179       "status": "finished",
00:28:33.179       "queue_depth": 128,
00:28:33.179       "io_size": 4096,
00:28:33.179       "runtime": 2.003957,
00:28:33.179       "iops": 29470.19322270887,
00:28:33.179       "mibps": 115.11794227620652,
00:28:33.179       "io_failed": 0,
00:28:33.179       "io_timeout": 0,
00:28:33.179       "avg_latency_us": 4337.812855533311,
00:28:33.179       "min_latency_us": 1739.2246153846154,
00:28:33.179       "max_latency_us": 12905.55076923077
00:28:33.179     }
00:28:33.179   ],
00:28:33.179   "core_count": 1
00:28:33.179 }
00:28:33.179 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:33.179 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:33.179 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:33.179 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:33.179 | .driver_specific
00:28:33.179 | .nvme_error
00:28:33.179 | .status_code
00:28:33.179 | .command_transient_transport_error'
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 ))
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 113663
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 113663 ']'
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 113663
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113663
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:33.440 killing process with pid 113663 Received shutdown signal, test time was about 2.000000 seconds
00:28:33.440
00:28:33.440 Latency(us)
00:28:33.440 [2024-11-27T06:30:52.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.440 [2024-11-27T06:30:52.970Z] ===================================================================================================================
00:28:33.440 [2024-11-27T06:30:52.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113663'
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 113663
00:28:33.440 06:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 113663
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=113742
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 113742 /var/tmp/bperf.sock
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 113742 ']'
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:33.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:33.701 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:33.701 [2024-11-27 06:30:53.051086] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization...
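The `(( 231 > 0 ))` line above is the pass criterion for the run that just finished: digest.sh reads back the NVMe error counters that bdevperf accumulated (error statistics were enabled earlier via `bdev_nvme_set_options --nvme-error-stat`) and requires at least one COMMAND TRANSIENT TRANSPORT ERROR to have been counted. A minimal sketch of that query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and exposing nvme0n1; the jq path is the one shown in the trace, and the local variable name is illustrative only:

    # Ask bdevperf for per-bdev I/O statistics over its RPC socket, then pull
    # out the transient transport error counter that --nvme-error-stat tracks.
    get_transient_errcount() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the test fails unless digest errors were observed

With retries set to -1, every corrupted digest shows up here as a counted transient transport error rather than a failed I/O, which is why the JSON above can report "io_failed": 0 while the error counter reads 231.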
00:28:33.702 [2024-11-27 06:30:53.051138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113742 ] 00:28:33.702 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.702 Zero copy mechanism will not be used. 00:28:33.702 [2024-11-27 06:30:53.162091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:33.702 [2024-11-27 06:30:53.186279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.702 [2024-11-27 06:30:53.203586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.962 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.962 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:33.962 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:33.962 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.223 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:34.223 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.223 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.223 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.223 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.223 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.485 nvme0n1 00:28:34.485 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:34.485 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.485 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.485 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.485 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.485 06:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:34.485 Zero copy mechanism will not be used. 00:28:34.485 Running I/O for 2 seconds... 
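Before this second 2-second run starts, the trace sets up the failure mode it intends to measure: 128 KiB random writes at queue depth 16 against a controller attached with data digest (--ddgst) enabled, with the accel framework told to corrupt the next 32 crc32c operations. A sketch of those setup steps as plain RPC calls, using only commands that appear in the trace; note the trace issues `accel_error_inject_error` through the test's `rpc_cmd` helper, and it is an assumption here that this resolves to SPDK's default /var/tmp/spdk.sock socket rather than bdevperf's:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bdevperf side: keep NVMe error counters and retry failed I/O forever,
    # so injected digest errors are counted instead of failing the workload.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target over TCP with data digest enabled on data PDUs.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations in the accel layer, so computed
    # data digests stop matching the payload and each affected WRITE completes
    # with COMMAND TRANSIENT TRANSPORT ERROR (00/22).  Socket is an assumption.
    $RPC -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 32

The earlier `accel_error_inject_error -o crc32c -t disable` call in the trace clears any leftover injection from the previous run before the corrupting one is armed.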
00:28:34.485 [2024-11-27 06:30:53.871630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.871699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.871718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.874674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.874735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.874749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.877478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.877537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.877549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.880271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.880334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.880347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.883216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.883267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.883280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.886027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.886077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.886090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.888835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.888915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.888927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.891648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.891696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.891708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.894467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.894515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.894528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.897283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.485 [2024-11-27 06:30:53.897330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.485 [2024-11-27 06:30:53.897342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.485 [2024-11-27 06:30:53.900103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.486 [2024-11-27 06:30:53.900154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.486 [2024-11-27 06:30:53.900166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.486 [2024-11-27 06:30:53.902889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.486 [2024-11-27 06:30:53.902960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.486 [2024-11-27 06:30:53.902972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.486 [2024-11-27 06:30:53.905679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.486 [2024-11-27 06:30:53.905742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.486 [2024-11-27 06:30:53.905755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.486 [2024-11-27 06:30:53.908499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:34.486 [2024-11-27 06:30:53.908553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.486 [2024-11-27 06:30:53.908566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:34.486 [2024-11-27 06:30:53.911298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8
00:28:34.486 [2024-11-27 06:30:53.911347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.486 [2024-11-27 06:30:53.911359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line sequence (a data_crc32_calc_done data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8, the offending WRITE on qid:1 with len:32 and varying lba/cid/sqhd values, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0) repeats roughly every 2.7 ms from 06:30:53.914 through 06:30:54.311; only the first occurrence is shown above ...]
06:30:54.295499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.018 [2024-11-27 06:30:54.297918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.018 [2024-11-27 06:30:54.298149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.018 [2024-11-27 06:30:54.298172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.018 [2024-11-27 06:30:54.300563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.300915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.300935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.303348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.303574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.303599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.306016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.306246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.306269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.308671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.309015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.309050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.311448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.311778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.311813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.314280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.314606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:35.019 [2024-11-27 06:30:54.314637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.317047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.317288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.317313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.319670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.319914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.319936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.322329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.322669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.322706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.325129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.325412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.325448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.327790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.328114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.328153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.330516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.330830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.330867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.333237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.333544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.333584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.335945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.336254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.336294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.338683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.339008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.339047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.341430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.341743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.341782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.344193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.344510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.344549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.346960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.347273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.347316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.349712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.350068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.352450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.352756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.352786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.355198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.355507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.355545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.357916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.358228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.358267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.360642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.361035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.361068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.363420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.363726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.363762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.366150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.366470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.366506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.369022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.369338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.369374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.371747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.019 [2024-11-27 06:30:54.372074] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.019 [2024-11-27 06:30:54.372109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.019 [2024-11-27 06:30:54.374461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.374756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.374792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.377172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.377487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.377523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.379940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.380252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.380290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.382616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.382930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.382965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.385311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.385609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.385650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.388023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.388320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.388359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.390730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.391057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.391097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.393476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.393790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.393826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.396207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.396520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.396558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.398928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.399248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.399288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.401709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.402044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.402082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.404443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.404771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.404814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.407174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.407494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.407536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.409970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 
06:30:54.410267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.410307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.412669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.412993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.413031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.415383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.415721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.418135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.418451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.418491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.420885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.421215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.421256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.423649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.423973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.424012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.426359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.426669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.426706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.429085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with 
pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.429398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.429434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.431831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.432161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.432201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.434653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.435005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.435044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.437483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.437798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.437834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.440264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.440565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.440606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.443042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.443365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.443406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.445836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.446172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.020 [2024-11-27 06:30:54.446211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.020 [2024-11-27 06:30:54.448591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.020 [2024-11-27 06:30:54.448941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.448979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.451418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.451743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.451781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.454220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.454609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.454648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.457088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.457423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.457464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.459879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.460215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.460255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.462707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.463012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.463051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.465459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.465760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.465799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.468212] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.468520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.468559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.471030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.471357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.471398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.473817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.474151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.474190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.476602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.476936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.476975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.479405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.479719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.479757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.482256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.482570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.482609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.485070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.485383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.485422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.021 
[2024-11-27 06:30:54.487827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.488148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.488186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.490588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.490883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.490932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.493375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.493595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.493623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.496064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.496284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.496310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.498722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.498951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.498975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.501415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.501631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.501655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.504100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.504317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.504340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.506772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.507001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.507025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.509407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.509621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.509645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.512025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.512238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.512262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.514614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.514834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.514859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.517316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.517532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.517555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.519983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.520201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.021 [2024-11-27 06:30:54.520226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.021 [2024-11-27 06:30:54.522667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.021 [2024-11-27 06:30:54.522884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.522923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.525342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.525556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.525576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.527929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.528141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.528161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.530527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.530739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.530758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.533159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.533372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.533392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.535755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.535980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.535998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.538370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.538584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.538604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.540996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.541209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.541233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.022 [2024-11-27 06:30:54.543593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.022 [2024-11-27 06:30:54.543805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.022 [2024-11-27 06:30:54.543828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.284 [2024-11-27 06:30:54.546265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.546483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.546505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.548853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.549079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.549104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.551460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.551673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.551692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.554069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.554289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.554308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.556659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.556882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.556914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.559356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.559574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.559594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.562021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.562247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.562266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.564661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.564891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.564918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.567278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.567491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.567510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.569880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.570106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.570126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.572488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.572702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.572721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.575103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.575318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.575337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.577707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.577927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 
06:30:54.577950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.580288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.580502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.580526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.582849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.583073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.583097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.585461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.585674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.585697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.588066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.588279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.588303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.590654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.590868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.590891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.593259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.593478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.593501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.595923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.596140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:35.285 [2024-11-27 06:30:54.596162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.598580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.598799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.598819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.601264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.601482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.601501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.603882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.604105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.604125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.606479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.606694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.606715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.609099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.609313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.609332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.611704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.611928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.285 [2024-11-27 06:30:54.611947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.285 [2024-11-27 06:30:54.614343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.285 [2024-11-27 06:30:54.614557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.614581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.616957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.617175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.617199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.619549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.619763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.619786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.622161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.622376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.622400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.624765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.624990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.625013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.627382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.627596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.627620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.629981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.630195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.630219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.632582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.632804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.632827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.635192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.635405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.635429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.637796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.638021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.638045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.640402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.640616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.640635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.643011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.643227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.643246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.645598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.645816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.645835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.648189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.648410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.648429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.650789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.651015] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.651034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.653406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.653623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.653646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.656013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.656228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.656249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.658601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.658814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.658838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.661214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.661426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.661450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.663792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.664014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.664037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.666400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.666614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.666633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.668995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.669210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.669229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.671567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.671779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.671799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.674160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.674377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.674397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.676723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.676953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.676972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.679305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.679517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.679536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.681885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.682109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.682128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.286 [2024-11-27 06:30:54.684476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.286 [2024-11-27 06:30:54.684689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.286 [2024-11-27 06:30:54.684712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.686989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 
06:30:54.687118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.687136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.689375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.689509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.689528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.691771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.691878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.691916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.694186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.694310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.694329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.696574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.696703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.696722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.699020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.699181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.699200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.701466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.701574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.701593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.703924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 
00:28:35.287 [2024-11-27 06:30:54.704059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.704078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.706343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.706464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.706484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.708765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.708914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.708932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.711207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.711318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.711337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.713675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.713806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.713824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.716060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.716170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.716189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.718492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.718629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.718649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.720948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.721030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.721042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.723373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.723493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.723513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.725734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.725874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.725893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.728173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.728296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.728315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.730577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.730697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.730716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.732978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.733094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.733113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.735352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.735493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.735512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.737762] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.737913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.737930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.740160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.740286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.740306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.742572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.742716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.742734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.744984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.745084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.745101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.747389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.747482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.747502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.287 [2024-11-27 06:30:54.749795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.287 [2024-11-27 06:30:54.749881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.287 [2024-11-27 06:30:54.749899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.752223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.752322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.752342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.754680] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.754815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.754834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.757164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.757288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.757307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.759577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.759672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.759691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.761979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.762109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.762127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.764372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.764501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.764520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.766788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.766930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.766949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.769197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.769329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.769348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.288 
[2024-11-27 06:30:54.771594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.771720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.771739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.774058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.774155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.774174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.776446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.776538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.776557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.778807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.778959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.778977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.781204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.781345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.781364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.783628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.783717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.783729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.786052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.786174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.786192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.788403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.788541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.788561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.790827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.790974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.790993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.793271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.793395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.793414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.795644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.795768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.795787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.798052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.798183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.798202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.800411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.800562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.800581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.802860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.802996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.803014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.805263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.805397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.805416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.807659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.807762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.807780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.288 [2024-11-27 06:30:54.810073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.288 [2024-11-27 06:30:54.810166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.288 [2024-11-27 06:30:54.810188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.812463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.812562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.812581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.814862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.815014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.815033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.817271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.817415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.817434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.819657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.819803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.819823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.822088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.822191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.822210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.824487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.824593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.824613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.826922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.827045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.829358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.829458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.829477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.831777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.831920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.831938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.834204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.834320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.834340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.836585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.836738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.836756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.839001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.839127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.839145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.841368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.841513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.841532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.843771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.843896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.843924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.846191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.846291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.846304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.848598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.848694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.848713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.851062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.851158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.851171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.853496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.853585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 
06:30:54.853598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.855911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.856009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.856027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.858311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.858411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.858430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.860678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.860816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.860835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.863137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.863260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.863279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.865520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.865617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.865636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.552 11538.00 IOPS, 1442.25 MiB/s [2024-11-27T06:30:55.082Z] [2024-11-27 06:30:54.868824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.868918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.868931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.871219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.552 [2024-11-27 06:30:54.871335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.552 [2024-11-27 06:30:54.871360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.552 [2024-11-27 06:30:54.873603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.873708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.873720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.876012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.876124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.876137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.878458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.878547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.878559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.880886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.880975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.880991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.883312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.883410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.883429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.885701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.885818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.885838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.888130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.888225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.888244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.890526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.890624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.890646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.892960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.893055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.893071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.895360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.895475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.895494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.897787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.897883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.897913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.900204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.900301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.900320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.902602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.902720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.902740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.905036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.905106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.905119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.907449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.907539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.907560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.909873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.909990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.910009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.912295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.912400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.912419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.914709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.914804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.914826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.917177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.917259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.917271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.919570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.919663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.919683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.922013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 
06:30:54.922086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.922098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.924411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.924516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.924536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.926865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.926958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.926970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.929276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.929359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.929371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.931655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.931736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.931748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.934046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.934148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.934161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.553 [2024-11-27 06:30:54.936447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.553 [2024-11-27 06:30:54.936552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.553 [2024-11-27 06:30:54.936572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.938896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with 
pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.938981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.938993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.941326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.941400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.941412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.943759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.943854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.943896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.946196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.946300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.946319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.948613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.948716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.948746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.951046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.951133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.951147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.953486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.953562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.953575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.955915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.955992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.956004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.958342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.958435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.958457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.960787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.960858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.960871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.963204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.963301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.963321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.965613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.965709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.965729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.968047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.968137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.968156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.970441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.970537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.970557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.972816] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.972899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.972921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.975231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.975324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.975344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.977656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.977759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.977779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.980108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.980192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.980204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.982510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.982611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.982631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.984935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.985030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.985048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.987364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.987446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.987464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 
[2024-11-27 06:30:54.989801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.989877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.989890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.992223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.992335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.992354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.994624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.994756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.994775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.997084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.997181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.997198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:54.999450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:54.999583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:54.999602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:55.001870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.554 [2024-11-27 06:30:55.001995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.554 [2024-11-27 06:30:55.002015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.554 [2024-11-27 06:30:55.004302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.004411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.004431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.006742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.006819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.006831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.009170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.009266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.009286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.011572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.011648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.011661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.014001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.014080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.014093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.016411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.016494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.016512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.018831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.018922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.018935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.021291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.021366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.021379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.023677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.023768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.023790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.026116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.026199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.026211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.028536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.028627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.028639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.030968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.031047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.031059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.033411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.033508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.033528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.035782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.035926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.035944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.038246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.038358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.038377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.040641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.040791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.040810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.043063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.043159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.043178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.045479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.045587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.045600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.047874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.047981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.048001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.050299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.050408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.050427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.052742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.052861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.052880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.055143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.055247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.055260] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.057578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.057687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.057706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.059967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.060079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.060098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.062370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.062502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.062521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.064791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.064901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.064927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.067171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.067281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.555 [2024-11-27 06:30:55.067300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.555 [2024-11-27 06:30:55.069621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.555 [2024-11-27 06:30:55.069733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.556 [2024-11-27 06:30:55.069753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.556 [2024-11-27 06:30:55.072063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.556 [2024-11-27 06:30:55.072139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.556 [2024-11-27 06:30:55.072152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.556 [2024-11-27 06:30:55.074471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.556 [2024-11-27 06:30:55.074585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.556 [2024-11-27 06:30:55.074605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.556 [2024-11-27 06:30:55.076940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.556 [2024-11-27 06:30:55.077036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.556 [2024-11-27 06:30:55.077049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.819 [2024-11-27 06:30:55.079334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.819 [2024-11-27 06:30:55.079437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.819 [2024-11-27 06:30:55.079456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.819 [2024-11-27 06:30:55.081759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.819 [2024-11-27 06:30:55.081858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.819 [2024-11-27 06:30:55.081878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.819 [2024-11-27 06:30:55.084191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.819 [2024-11-27 06:30:55.084264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.819 [2024-11-27 06:30:55.084276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.819 [2024-11-27 06:30:55.086595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.819 [2024-11-27 06:30:55.086686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.819 [2024-11-27 06:30:55.086699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.819 [2024-11-27 06:30:55.089050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.819 [2024-11-27 06:30:55.089125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.819 [2024-11-27 
06:30:55.089138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.819 [2024-11-27 06:30:55.091412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.819 [2024-11-27 06:30:55.091518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.819 [2024-11-27 06:30:55.091539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.819 [2024-11-27 06:30:55.093839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.819 [2024-11-27 06:30:55.093956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.093975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.096289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.096391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.096411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.098711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.098824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.098844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.101164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.101239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.101251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.103585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.103678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.103691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.106039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.106114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.820 [2024-11-27 06:30:55.106127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.108461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.108541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.108553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.110843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.110969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.110988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.113275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.113408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.113428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.115702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.115819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.115839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.118185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.118282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.118301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.120598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.120693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.120713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.123003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.123135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.123154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.125444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.125546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.125559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.127870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.127961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.127974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.130336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.130436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.130455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.132718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.132848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.132867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.135153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.135251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.135270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.137580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.137701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.820 [2024-11-27 06:30:55.137721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.820 [2024-11-27 06:30:55.140028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:35.820 [2024-11-27 06:30:55.140124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.820 [2024-11-27 06:30:55.140137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.820 [2024-11-27 06:30:55.142429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8
00:28:35.820 [2024-11-27 06:30:55.142545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.820 [2024-11-27 06:30:55.142564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.820 [2024-11-27 06:30:55.144844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8
00:28:35.820 [2024-11-27 06:30:55.144958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.820 [2024-11-27 06:30:55.144977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern repeats for every subsequent WRITE on this queue pair: a tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8, the failing WRITE command notice (varying lba, cid alternating 0/1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062, spanning 06:30:55.147263 through 06:30:55.474639 ...]
00:28:36.089 [2024-11-27 06:30:55.476982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8
00:28:36.089 [2024-11-27 06:30:55.477073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.477087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.479451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.479544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.479557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.481948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.482027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.482040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.484349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.484448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.484467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.486747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.486845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.486865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.489168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.489280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.489299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.491598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.491689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.491701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.494020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.494123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.494143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.496425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.496506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.089 [2024-11-27 06:30:55.496519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.089 [2024-11-27 06:30:55.498841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.089 [2024-11-27 06:30:55.498936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.498948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.501253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.501346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.501368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.503665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.503763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.503783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.506058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.506190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.506210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.508445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.508550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.508570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.510868] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.510962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.510974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.513323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.513405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.513418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.515748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.515825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.515837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.518157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.518291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.518310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.520573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.520696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.520716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.523028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.523131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.523148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.525454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.525559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.525571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.527879] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.527966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.527985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.530380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.530455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.530468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.532834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.532928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.532941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.535338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.535422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.535435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.537821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.537927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.537944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.540303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.540384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.540404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.542765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.542841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.542853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.090 
[2024-11-27 06:30:55.545194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.545279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.545291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.547617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.547691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.547703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.550047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.550181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.550193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.552514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.552611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.552624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.555005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.555098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.557493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.557570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.557583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.559951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.560058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.560070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.562442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.562528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.090 [2024-11-27 06:30:55.562541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.090 [2024-11-27 06:30:55.564829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.090 [2024-11-27 06:30:55.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.564990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.567258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.567371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.567391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.569692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.569800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.569819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.572082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.572185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.572204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.574505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.574627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.574647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.576946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.577038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.577055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.579330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.579447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.579466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.581754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.581888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.581916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.584190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.584267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.584279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.586621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.586714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.586727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.589083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.589180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.589197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.591498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.591579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.591592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.593924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.594003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.594016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.596321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.596420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.596440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.598721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.598836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.598856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.601171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.601304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.601323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.603585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.603703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.603722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.606012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.606108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.606127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.608395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.608529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.608549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.610845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.610956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.610975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.091 [2024-11-27 06:30:55.613266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.091 [2024-11-27 06:30:55.613400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.091 [2024-11-27 06:30:55.613417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.615687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.615801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.615821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.618174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.618309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.618329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.620665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.620806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.620825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.623168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.623304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.623324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.625642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.625741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.625760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.628141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.628252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 
06:30:55.628265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.630604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.630682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.630695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.633034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.633132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.633152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.635435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.635567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.635587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.637842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.637951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.637969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.640261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.640394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.640407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.642695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.642790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.642802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.645108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.645197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:36.355 [2024-11-27 06:30:55.645220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.647519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.647622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.647642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.355 [2024-11-27 06:30:55.649961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.355 [2024-11-27 06:30:55.650049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.355 [2024-11-27 06:30:55.650062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.652380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.652470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.652490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.654898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.654987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.655000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.657387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.657472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.657484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.659862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.659950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.659963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.662360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.662433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.662446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.664794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.664873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.664885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.667249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.667326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.667338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.669647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.669746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.669758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.672044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.672139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.672159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.674431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.674527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.674547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.676829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.676960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.676978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.679252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.679357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.679376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.681626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.681743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.681762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.684077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.684172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.684191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.686457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.686580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.686600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.688916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.689020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.689039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.691293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.691418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.691437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.693678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.693778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.693797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.696108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.696207] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.696227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.698534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.698668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.698687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.700992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.701121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.701140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.703390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.703511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.703531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.705828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.705930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.705943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.708272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.708350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.708362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.710721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.710804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.710816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.713169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.713268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.713287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.356 [2024-11-27 06:30:55.715569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.356 [2024-11-27 06:30:55.715700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.356 [2024-11-27 06:30:55.715719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.718011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.718112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.718125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.720407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.720528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.720559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.722805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.722921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.722952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.725204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.725318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.725337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.727584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.727703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.727722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.729987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 
06:30:55.730099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.730118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.732354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.732487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.732507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.734776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.734871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.734925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.737163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.737294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.737313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.739586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.739691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.739710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.742025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.742118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.742131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.744429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.744521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.744557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.746814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with 
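The cycle above is the digest failure path exercised end to end. NVMe/TCP can protect each data PDU with a CRC32C data digest; in this run every WRITE's payload digest fails verification (evidently the fault this test injects), data_crc32_calc_done() in tcp.c logs the *ERROR* line when the recomputed CRC32C does not match the digest carried in the PDU, and the command is completed back with TRANSIENT TRANSPORT ERROR (00/22). A minimal standalone sketch of that check follows, assuming nothing from SPDK itself: the payload bytes, the injected bit flip, and the crc32c() helper are all illustrative, not SPDK's table-driven or offloaded implementation.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), the checksum NVMe/TCP uses for header and
 * data digests; 0x82F63B78 is the reflected form of the 0x1EDC6F41 poly. */
static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int
main(void)
{
	/* Stand-in for a received data PDU payload (illustrative only). */
	const uint8_t payload[] = "example data PDU payload";
	uint32_t calculated = crc32c(payload, sizeof(payload) - 1);
	uint32_t received = calculated ^ 0x1u;	/* simulated corrupted digest */

	/* On mismatch the transport fails the command with the retryable
	 * status TRANSIENT TRANSPORT ERROR (00/22), as logged above. */
	if (received != calculated) {
		printf("Data digest error (received 0x%08x, calculated 0x%08x)\n",
		       received, calculated);
	}
	return 0;
}

Because each completion carries dnr:0 (Do Not Retry clear) and a transient status, the host may resubmit the failed WRITE, which is why the same three-record cycle recurs for the remainder of the run.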
pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.746932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.746966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.749220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.749322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.749341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.751699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.751777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.751789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.754185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.754279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.754292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.756651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.756730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.756749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.759069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.759144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.759156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.761449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.761545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.761564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.763831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.763975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.763994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.766256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.766340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.766352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.768655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.768731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.768751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.771088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.771181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.771193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.773461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.773592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.773611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.775894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.776033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.776052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.778334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.778431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.778450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.780722] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.357 [2024-11-27 06:30:55.780819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.357 [2024-11-27 06:30:55.780859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.357 [2024-11-27 06:30:55.783131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.783235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.783254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.785536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.785611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.785623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.787967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.788061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.788080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.790336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.790471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.790489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.792767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.792876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.792895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.795193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.795304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.795323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.797593] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.797707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.797726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.800031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.800125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.800146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.802371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.802480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.802500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.804810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.804918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.804934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.807276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.807380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.807399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.809779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.809861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.809873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.812228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.812308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.812320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.358 
[2024-11-27 06:30:55.814614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.814705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.814717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.817030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.817104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.817116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.819409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.819487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.819499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.821797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.821877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.821889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.824199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.824317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.824336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.826648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.826758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.826777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.829075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.829187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.829203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.831465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.831595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.831615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.833884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.834015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.834034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.836296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.836402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.836420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.838711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.838791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.838803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.841139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.841211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.841223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.843523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.843614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.843635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.845967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.358 [2024-11-27 06:30:55.846045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.358 [2024-11-27 06:30:55.846057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.358 [2024-11-27 06:30:55.848361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.848489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.848507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.359 [2024-11-27 06:30:55.850779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.850875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.850895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.359 [2024-11-27 06:30:55.853179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.853292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.853311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.359 [2024-11-27 06:30:55.855563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.855672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.855691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.359 [2024-11-27 06:30:55.857987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.858096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.858115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.359 [2024-11-27 06:30:55.860411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.860551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.860570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.359 [2024-11-27 06:30:55.862818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.862934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.862973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.359 [2024-11-27 06:30:55.865194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dd18e0) with pdu=0x200016eff3c8 00:28:36.359 [2024-11-27 06:30:55.865318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.359 [2024-11-27 06:30:55.865337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.359 12157.00 IOPS, 1519.62 MiB/s 00:28:36.359 Latency(us) 00:28:36.359 [2024-11-27T06:30:55.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.359 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:36.359 nvme0n1 : 2.00 12152.37 1519.05 0.00 0.00 1313.55 901.12 7360.20 00:28:36.359 [2024-11-27T06:30:55.889Z] =================================================================================================================== 00:28:36.359 [2024-11-27T06:30:55.889Z] Total : 12152.37 1519.05 0.00 0.00 1313.55 901.12 7360.20 00:28:36.359 { 00:28:36.359 "results": [ 00:28:36.359 { 00:28:36.359 "job": "nvme0n1", 00:28:36.359 "core_mask": "0x2", 00:28:36.359 "workload": "randwrite", 00:28:36.359 "status": "finished", 00:28:36.359 "queue_depth": 16, 00:28:36.359 "io_size": 131072, 00:28:36.359 "runtime": 2.002079, 00:28:36.359 "iops": 12152.367613865386, 00:28:36.359 "mibps": 1519.0459517331733, 00:28:36.359 "io_failed": 0, 00:28:36.359 "io_timeout": 0, 00:28:36.359 "avg_latency_us": 1313.5520662682982, 00:28:36.359 "min_latency_us": 901.12, 00:28:36.359 "max_latency_us": 7360.196923076923 00:28:36.359 } 00:28:36.359 ], 00:28:36.359 "core_count": 1 00:28:36.359 } 00:28:36.620 06:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.620 06:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.620 | .driver_specific 00:28:36.620 | .nvme_error 00:28:36.620 | .status_code 00:28:36.620 | .command_transient_transport_error' 00:28:36.620 06:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.620 06:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 785 > 0 )) 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 113742 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 113742 ']' 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 113742 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113742 00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.620 
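The get_transient_errcount helper traced above simply pulls the transient-transport-error counter out of bdev_get_iostat over the bperf RPC socket. A minimal stand-alone sketch of the same query (socket path and bdev name taken from this run):

    # Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions seen by a
    # bperf bdev, mirroring host/digest.sh@27-28 above.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }
    # The test then asserts that at least one such error was injected:
    (( $(get_transient_errcount nvme0n1) > 0 ))   # here: (( 785 > 0 ))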
00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:36.620 killing process with pid 113742
06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113742'
Received shutdown signal, test time was about 2.000000 seconds
00:28:36.620
00:28:36.620 Latency(us)
00:28:36.620 [2024-11-27T06:30:56.150Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:36.620 [2024-11-27T06:30:56.150Z] ===================================================================================================================
00:28:36.620 [2024-11-27T06:30:56.150Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 113742
00:28:36.620 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 113742
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 113444
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 113444 ']'
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 113444
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113444
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:36.880 killing process with pid 113444
06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113444'
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 113444
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 113444
00:28:36.880
00:28:36.880 real    0m16.032s
00:28:36.880 user    0m31.016s
00:28:36.880 sys     0m3.385s
00:28:36.880 ************************************
00:28:36.880 END TEST nvmf_digest_error
00:28:36.880 ************************************
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:36.880 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
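killprocess, traced at common/autotest_common.sh@954-978 above, follows a conventional guard-then-kill shape. A condensed sketch of that pattern (the shipped helper has more branches, for example for sudo-owned processes and a forced SIGKILL fallback):

    # Sketch of the killprocess pattern visible in the trace.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # @954: require a pid argument
        kill -0 "$pid" || return 1             # @958: fail if the process is gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        echo "killing process with pid $pid"
        kill "$pid"                            # @973: SIGTERM for a clean shutdown
        wait "$pid" || true                    # @978: reap it if it is a child job
    }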
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:37.142 rmmod nvme_tcp
00:28:37.142 rmmod nvme_fabrics
00:28:37.142 rmmod nvme_keyring
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 113444 ']'
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 113444
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 113444 ']'
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 113444
00:28:37.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (113444) - No such process
00:28:37.142 Process with pid 113444 is not found
06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 113444 is not found'
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:28:37.142 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
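The iptr helper (nvmf/common.sh@297, expanded at @791 above) undoes only SPDK's own firewall changes: every rule the harness adds carries an SPDK_NVMF comment tag, so cleanup is a filter over iptables-save. As a sketch:

    # Sketch: restore the firewall minus any rule carrying the SPDK_NVMF tag.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }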
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0
00:28:37.402
00:28:37.402 real    0m33.006s
00:28:37.402 user    1m2.092s
00:28:37.402 sys     0m7.137s
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:37.402 ************************************
00:28:37.402 END TEST nvmf_digest
00:28:37.402 ************************************
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]]
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]]
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:37.402 ************************************
00:28:37.402 START TEST nvmf_mdns_discovery
00:28:37.402 ************************************
00:28:37.402 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
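run_test (invoked at nvmf_host.sh@38) is the harness wrapper responsible for the START TEST/END TEST banners and the real/user/sys timing seen above and below. In outline it behaves like this sketch (the real wrapper in autotest_common.sh also validates arguments, as the @1105 check shows, and manages xtrace state):

    # Sketch of the run_test banner-and-timing wrapper.
    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"        # e.g. .../mdns_discovery.sh --transport=tcp
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }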
00:28:37.402 * Looking for test storage...
00:28:37.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version
06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:37.661 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:28:37.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.661 --rc genhtml_branch_coverage=1
00:28:37.661 --rc genhtml_function_coverage=1
00:28:37.661 --rc genhtml_legend=1
00:28:37.661 --rc geninfo_all_blocks=1
00:28:37.661 --rc geninfo_unexecuted_blocks=1
00:28:37.661
00:28:37.662 '
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:28:37.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.662 --rc genhtml_branch_coverage=1
00:28:37.662 --rc genhtml_function_coverage=1
00:28:37.662 --rc genhtml_legend=1
00:28:37.662 --rc geninfo_all_blocks=1
00:28:37.662 --rc geninfo_unexecuted_blocks=1
00:28:37.662
00:28:37.662 '
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:28:37.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.662 --rc genhtml_branch_coverage=1
00:28:37.662 --rc genhtml_function_coverage=1
00:28:37.662 --rc genhtml_legend=1
00:28:37.662 --rc geninfo_all_blocks=1
00:28:37.662 --rc geninfo_unexecuted_blocks=1
00:28:37.662
00:28:37.662 '
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:28:37.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.662 --rc genhtml_branch_coverage=1
00:28:37.662 --rc genhtml_function_coverage=1
00:28:37.662 --rc genhtml_legend=1
00:28:37.662 --rc geninfo_all_blocks=1
00:28:37.662 --rc geninfo_unexecuted_blocks=1
00:28:37.662
00:28:37.662 '
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s
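The cmp_versions walk traced above (scripts/common.sh@333-368) is how 'lt 1.15 2' decides that the installed lcov predates version 2: both version strings are split on '.', '-' and ':' and compared field by field as integers. A condensed sketch of the same logic (the shipped helper also supports '>=', '<=' and '==' and validates each field through its decimal() helper):

    # Sketch: numeric, field-wise version comparison, e.g. cmp_versions 1.15 '<' 2.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]    # every field equal
    }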
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:37.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:28:37.662 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
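nvmf_veth_init, entered just above, is what those NVMF_* variables describe: veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all the *_br peer ends enslaved to one bridge. The trace below executes exactly that; reduced to a single initiator/target pair it is roughly:

    # Sketch: one initiator/target leg of the nvmf_veth_init topology below.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge joins the two legs
    ip link set nvmf_tgt_br master nvmf_br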
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:28:37.663 06:30:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:28:37.663 Cannot find device "nvmf_init_br"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:28:37.663 Cannot find device "nvmf_init_br2"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:28:37.663 Cannot find device "nvmf_tgt_br"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:28:37.663 Cannot find device "nvmf_tgt_br2"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:28:37.663 Cannot find device "nvmf_init_br"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:28:37.663 Cannot find device "nvmf_init_br2"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:28:37.663 Cannot find device "nvmf_tgt_br"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:28:37.663 Cannot find device "nvmf_tgt_br2"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:28:37.663 Cannot find device "nvmf_br"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:28:37.663 Cannot find device "nvmf_init_if"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:28:37.663 Cannot find device "nvmf_init_if2"
06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true
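Note the shape of this pre-clean: every 'ip link ... nomaster/down/delete' is paired with a fallback true (the '# true' trace entries), so a fresh machine where the devices do not exist yet, as here, cannot fail the script. In script form the pattern is simply:

    # Sketch: idempotent pre-clean, tolerant of devices that are not there yet.
    ip link set nvmf_init_br nomaster  || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true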
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:37.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:37.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
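[Annotation] The nvmf_veth_init sequence above first tries to delete any leftover interfaces from a previous run (the "Cannot find device" errors are the expected no-op case) and then rebuilds the test topology from scratch. Condensed from the trace, keeping the exact interface names and addresses it uses, the topology amounts to:

# Two initiator-side veth pairs stay on the host; two target-side pairs are
# moved into a dedicated network namespace (sketch reconstructed from the trace):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if        # first initiator IP
ip addr add 10.0.0.2/24 dev nvmf_init_if2       # second initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target IP
ip link add nvmf_br type bridge
# All interfaces are then brought up and the *_br peers are enslaved to
# nvmf_br, as the "master nvmf_br" commands immediately below show.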
00:28:37.663 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:37.921 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:37.921 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:28:37.921 00:28:37.921 --- 10.0.0.3 ping statistics --- 00:28:37.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.921 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:37.921 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:37.921 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:28:37.921 00:28:37.921 --- 10.0.0.4 ping statistics --- 00:28:37.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.921 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:28:37.921 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:37.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:37.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:28:37.921 00:28:37.922 --- 10.0.0.1 ping statistics --- 00:28:37.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.922 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:37.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:37.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:28:37.922 00:28:37.922 --- 10.0.0.2 ping statistics --- 00:28:37.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.922 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=114076 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 114076 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 114076 ']' 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.922 06:30:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:37.922 [2024-11-27 06:30:57.299233] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:28:37.922 [2024-11-27 06:30:57.299286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.922 [2024-11-27 06:30:57.414100] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:37.922 [2024-11-27 06:30:57.440880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.180 [2024-11-27 06:30:57.457249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.180 [2024-11-27 06:30:57.457279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.180 [2024-11-27 06:30:57.457284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.180 [2024-11-27 06:30:57.457287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.180 [2024-11-27 06:30:57.457291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.180 [2024-11-27 06:30:57.457486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.747 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 [2024-11-27 06:30:58.263561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 [2024-11-27 06:30:58.271637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.748 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.006 null0 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.006 null1 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.006 null2 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.006 null3 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=114126 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 114126 /tmp/host.sock 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 114126 ']' 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.006 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.006 06:30:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:39.006 [2024-11-27 06:30:58.357008] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:28:39.006 [2024-11-27 06:30:58.357064] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114126 ] 00:28:39.006 [2024-11-27 06:30:58.470576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:39.006 [2024-11-27 06:30:58.498978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.006 [2024-11-27 06:30:58.518385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=114155 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:28:39.941 06:30:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:28:39.941 Process 1083 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:28:39.941 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:28:39.941 Successfully dropped root privileges. 00:28:39.941 avahi-daemon 0.8 starting up. 00:28:39.941 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:28:39.941 Successfully called chroot(). 00:28:39.941 Successfully dropped remaining capabilities. 00:28:39.941 No service file found in /etc/avahi/services. 00:28:40.873 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 
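[Annotation] The avahi-daemon launched above reads its configuration from /dev/fd/63, i.e. a bash process substitution fed by the echo traced on the same script line. Written out as a regular file, that configuration is simply:

[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no

The daemon runs inside nvmf_tgt_ns_spdk via ip netns exec, and allow-interfaces pins it to the two target-side veths, so the test's mDNS traffic never touches a real host interface.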
00:28:40.873 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:28:40.873 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:28:40.873 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:28:40.873 Network interface enumeration completed. 00:28:40.873 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:28:40.873 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:28:40.873 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:28:40.873 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.873 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1291598528. 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:40.873 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 
00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:40.874 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:28:41.132 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 [2024-11-27 06:31:00.572036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.133 06:31:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:28:41.133 [2024-11-27 06:31:00.644609] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:28:42.065 [2024-11-27 06:31:01.544621] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:28:42.629 [2024-11-27 06:31:01.944646] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:42.629 [2024-11-27 06:31:01.944679] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:42.629 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:42.629 cookie is 0 00:28:42.629 is_local: 1 00:28:42.629 our_own: 0 00:28:42.629 wide_area: 0 00:28:42.629 multicast: 1 00:28:42.629 cached: 1 00:28:42.629 [2024-11-27 06:31:02.044643] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:42.629 [2024-11-27 06:31:02.044672] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:42.630 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:42.630 cookie is 0 00:28:42.630 is_local: 1 00:28:42.630 our_own: 0 00:28:42.630 wide_area: 0 00:28:42.630 multicast: 1 00:28:42.630 cached: 1 00:28:43.592 [2024-11-27 06:31:02.945465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.592 [2024-11-27 06:31:02.945505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7270 with addr=10.0.0.4, port=8009 00:28:43.592 [2024-11-27 06:31:02.945520] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:43.592 [2024-11-27 06:31:02.945530] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:43.592 [2024-11-27 06:31:02.945535] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:28:43.592 [2024-11-27 06:31:03.056972] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:43.592 
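[Annotation] At this point both sides of the test are wired up over JSON-RPC. rpc_cmd in these traces is the autotest wrapper around SPDK's scripts/rpc.py: the "-s /tmp/host.sock" variant talks to the host/initiator instance, the bare variant to the target in the namespace. Replayed as plain rpc.py calls (socket paths as above; every command is taken verbatim from the trace), the essential sequence is:

# Target side: transport, discovery listener, subsystems, namespaces, host ACLs
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
scripts/rpc.py nvmf_publish_mdns_prr        # advertise the discovery service via avahi
# Host side: start the mDNS-driven discovery client
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

The same pattern repeats for nqn.2016-06.io.spdk:cnode20/null2; the sleep 5 gives the avahi browse/resolve round trip time to complete before anything is asserted.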
[2024-11-27 06:31:03.056987] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:43.592 [2024-11-27 06:31:03.056996] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:43.858 [2024-11-27 06:31:03.143046] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:28:43.858 [2024-11-27 06:31:03.197256] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:28:43.858 [2024-11-27 06:31:03.197721] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10ef090:1 started. 00:28:43.858 [2024-11-27 06:31:03.198937] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:28:43.858 [2024-11-27 06:31:03.198954] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:43.858 [2024-11-27 06:31:03.205626] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10ef090 was disconnected and freed. delete nvme_qpair. 00:28:44.425 [2024-11-27 06:31:03.945407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.425 [2024-11-27 06:31:03.945441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10eee50 with addr=10.0.0.4, port=8009 00:28:44.425 [2024-11-27 06:31:03.945451] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:44.425 [2024-11-27 06:31:03.945456] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:44.425 [2024-11-27 06:31:03.945461] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:28:45.799 [2024-11-27 06:31:04.945414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.799 [2024-11-27 06:31:04.945444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d7eb0 with addr=10.0.0.4, port=8009 00:28:45.799 [2024-11-27 06:31:04.945455] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:45.799 [2024-11-27 06:31:04.945460] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:45.799 [2024-11-27 06:31:04.945465] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
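[Annotation] The asymmetry in the entries above is deliberate: the resolver reports the service on both 10.0.0.3 and 10.0.0.4, but only 10.0.0.3:8009 has a discovery listener yet. The 10.0.0.3 path therefore attaches cleanly (new subsystem mdns1_nvme0, controller to 10.0.0.3:4420), while every connect() to 10.0.0.4:8009 fails with errno 111 (ECONNREFUSED) and, per the timestamps, the discovery poller retries about once per second until the second listener is added below. The attach state can be inspected at any point from the host instance with the same RPCs the test uses later:

scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers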
host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:46.366 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:46.366 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:46.366 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:46.366 [2024-11-27 06:31:05.649340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:28:46.366 [2024-11-27 06:31:05.651653] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:46.366 [2024-11-27 06:31:05.651674] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:46.366 [2024-11-27 06:31:05.657294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.4 port 4420 *** 00:28:46.366 [2024-11-27 06:31:05.657643] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.366 06:31:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:28:46.366 [2024-11-27 06:31:05.788717] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:46.366 [2024-11-27 06:31:05.788814] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:46.366 [2024-11-27 06:31:05.874782] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:46.625 [2024-11-27 06:31:05.957207] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:28:46.625 [2024-11-27 06:31:05.957218] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:28:46.625 [2024-11-27 06:31:05.957225] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:46.625 [2024-11-27 06:31:06.043281] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:28:46.625 [2024-11-27 06:31:06.097499] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:28:46.625 [2024-11-27 06:31:06.097857] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x10ec410:1 started. 00:28:46.625 [2024-11-27 06:31:06.098847] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:28:46.625 [2024-11-27 06:31:06.098860] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:28:46.625 [2024-11-27 06:31:06.105997] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x10ec410 was disconnected and freed. delete nvme_qpair. 
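[Annotation] check_mdns_request_exists (used above for the not-found case and again just below for the found case) drives its verdict off avahi-browse's parseable output: with -p, "+" records are browse events and "=" records are resolved services with hostname, address, port, and TXT data appended as semicolon-separated fields. The helper then pattern-matches those records for the service name and, in the found case, for the expected address and port:

avahi-browse -t -r _nvme-disc._tcp -p
# +;<interface>;IPv4;spdk1;_nvme-disc._tcp;local
# =;<interface>;IPv4;spdk1;_nvme-disc._tcp;local;<hostname>.local;10.0.0.4;8009;"nqn=..." "p=tcp"

In this environment the interface field shows up as "(null)", which is why the checks match only on the service name, address, and port substrings.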
00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:47.191 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:47.191 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:47.191 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:47.191 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:47.191 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:47.191 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:47.191 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:28:47.191 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:28:47.192 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:28:47.192 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:28:47.192 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:28:47.192 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.192 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.192 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
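[Annotation] Each of the get_* helpers exercised here is a thin wrapper that pulls one RPC and flattens the result into a single sorted line, which is what makes the [[ ... == ... ]] comparisons in the trace possible. Reconstructed from the xtrace (the originals live in host/mdns_discovery.sh):

get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
    # Port numbers for one controller, numerically sorted
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}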
00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:47.450 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 [2024-11-27 06:31:06.844672] bdev_mdns_client.c: 385:mdns_browse_handler: *ERROR*: (Browser) REMOVE: service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # 
xargs 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 [2024-11-27 06:31:06.927167] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10d8e40:1 started. 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 [2024-11-27 06:31:06.936134] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10d8e40 was disconnected and freed. delete nvme_qpair. 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 06:31:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:28:47.451 [2024-11-27 06:31:06.939183] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x10eaf60:1 started. 00:28:47.451 [2024-11-27 06:31:06.944674] bdev_mdns_client.c: 385:mdns_browse_handler: *ERROR*: (Browser) REMOVE: service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:47.451 [2024-11-27 06:31:06.946054] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x10eaf60 was disconnected and freed. delete nvme_qpair. 
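[Annotation] The notification check above counts SPDK bdev events rather than diffing lists: notify_get_notifications returns every event recorded after a given id, the helper counts the new entries with jq, and the running notify_id cursor advances by that count. Each pair of newly exposed namespaces therefore shows up as a count of exactly 2. In the shape the xtrace shows:

notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
notify_id=$((notify_id + notification_count))   # 0 -> 2 -> 4 across this excerpt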
00:28:48.385 [2024-11-27 06:31:07.644688] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:48.385 [2024-11-27 06:31:07.644719] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:48.385 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:48.385 cookie is 0 00:28:48.385 is_local: 1 00:28:48.385 our_own: 0 00:28:48.385 wide_area: 0 00:28:48.385 multicast: 1 00:28:48.385 cached: 1 00:28:48.385 [2024-11-27 06:31:07.644727] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:28:48.385 [2024-11-27 06:31:07.744685] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:48.386 [2024-11-27 06:31:07.744705] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:48.386 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:48.386 cookie is 0 00:28:48.386 is_local: 1 00:28:48.386 our_own: 0 00:28:48.386 wide_area: 0 00:28:48.386 multicast: 1 00:28:48.386 cached: 1 00:28:48.386 [2024-11-27 06:31:07.744711] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:28:48.644 06:31:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.644 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:28:48.644 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:28:48.644 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:28:48.644 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:28:48.644 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.644 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:48.644 [2024-11-27 06:31:08.022289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:48.644 [2024-11-27 06:31:08.023012] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:48.645 [2024-11-27 06:31:08.023034] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:48.645 [2024-11-27 06:31:08.023054] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:48.645 [2024-11-27 06:31:08.023061] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:48.645 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.645 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:28:48.645 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.645 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:48.645 [2024-11-27 06:31:08.030244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:28:48.645 [2024-11-27 06:31:08.030999] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:48.645 [2024-11-27 06:31:08.031028] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:48.645 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.645 06:31:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:28:48.645 [2024-11-27 06:31:08.044687] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:48.645 [2024-11-27 06:31:08.044702] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:48.645 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:48.645 cookie is 0 00:28:48.645 is_local: 1 00:28:48.645 our_own: 0 00:28:48.645 wide_area: 0 00:28:48.645 multicast: 1 00:28:48.645 cached: 1 00:28:48.645 [2024-11-27 06:31:08.044708] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:28:48.645 [2024-11-27 06:31:08.144688] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:48.645 [2024-11-27 06:31:08.144706] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:48.645 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:48.645 cookie is 0 00:28:48.645 is_local: 1 00:28:48.645 our_own: 0 00:28:48.645 wide_area: 0 00:28:48.645 multicast: 1 00:28:48.645 cached: 1 00:28:48.645 [2024-11-27 06:31:08.144712] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:28:48.645 [2024-11-27 06:31:08.145539] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:28:48.645 [2024-11-27 06:31:08.162541] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:28:48.903 [2024-11-27 06:31:08.210834] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:28:48.903 [2024-11-27 06:31:08.210870] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:28:48.903 [2024-11-27 06:31:08.210876] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:48.903 [2024-11-27 06:31:08.210879] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:48.903 [2024-11-27 06:31:08.210887] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:48.903 [2024-11-27 06:31:08.220733] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:28:48.903 [2024-11-27 06:31:08.220765] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:28:48.903 [2024-11-27 06:31:08.220769] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:28:48.903 [2024-11-27 06:31:08.220772] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:48.903 [2024-11-27 06:31:08.220779] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:48.903 [2024-11-27 06:31:08.256601] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:48.903 [2024-11-27 06:31:08.256613] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:48.903 [2024-11-27 06:31:08.266610] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:28:48.903 [2024-11-27 06:31:08.266621] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 
-- # get_subsystem_names 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:28:49.852 06:31:09 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.852 [2024-11-27 06:31:09.223447] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:49.852 [2024-11-27 06:31:09.223468] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:49.852 [2024-11-27 06:31:09.223487] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:49.852 [2024-11-27 06:31:09.223495] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:49.852 [2024-11-27 06:31:09.226511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.852 [2024-11-27 06:31:09.226535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.852 [2024-11-27 06:31:09.226541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.852 [2024-11-27 06:31:09.226545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.852 [2024-11-27 06:31:09.226550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.852 [2024-11-27 06:31:09.226554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.852 [2024-11-27 06:31:09.226559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.852 [2024-11-27 06:31:09.226563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.852 [2024-11-27 06:31:09.226568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.852 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.853 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:28:49.853 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.853 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:49.853 [2024-11-27 06:31:09.235447] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:28:49.853 [2024-11-27 06:31:09.235474] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:28:49.853 [2024-11-27 06:31:09.236490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.853 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.853 06:31:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:28:49.853 [2024-11-27 06:31:09.239351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.853 [2024-11-27 06:31:09.239362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.853 [2024-11-27 06:31:09.239368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.853 [2024-11-27 06:31:09.239372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.853 [2024-11-27 06:31:09.239377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.853 [2024-11-27 06:31:09.239381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.853 [2024-11-27 06:31:09.239385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.853 [2024-11-27 06:31:09.239389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.853 [2024-11-27 06:31:09.239393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.853 [2024-11-27 06:31:09.246498] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.853 [2024-11-27 06:31:09.246513] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.853 [2024-11-27 06:31:09.246516] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.246519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.853 [2024-11-27 06:31:09.246537] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.246582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.853 [2024-11-27 06:31:09.246591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.853 [2024-11-27 06:31:09.246596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.853 [2024-11-27 06:31:09.246604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.853 [2024-11-27 06:31:09.246611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.853 [2024-11-27 06:31:09.246615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.853 [2024-11-27 06:31:09.246620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.853 [2024-11-27 06:31:09.246624] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.853 [2024-11-27 06:31:09.246627] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.853 [2024-11-27 06:31:09.246629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.853 [2024-11-27 06:31:09.249335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.853 [2024-11-27 06:31:09.256543] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.853 [2024-11-27 06:31:09.256556] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.853 [2024-11-27 06:31:09.256558] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.256560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.853 [2024-11-27 06:31:09.256574] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
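The `connect() failed, errno = 111` records that start here recur for the rest of the section: the test has just removed the 4420 listeners, so every reconnect attempt to 10.0.0.3:4420 (and 10.0.0.4:4420) is refused while the driver keeps retrying. On Linux, errno 111 is ECONNREFUSED, which can be confirmed with a one-liner:

# errno 111 on Linux is ECONNREFUSED ("Connection refused"):
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'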
00:28:49.853 [2024-11-27 06:31:09.256597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.853 [2024-11-27 06:31:09.256604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.853 [2024-11-27 06:31:09.256608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.853 [2024-11-27 06:31:09.256615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.853 [2024-11-27 06:31:09.256621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.853 [2024-11-27 06:31:09.256625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.853 [2024-11-27 06:31:09.256628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.853 [2024-11-27 06:31:09.256632] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.853 [2024-11-27 06:31:09.256634] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.853 [2024-11-27 06:31:09.256636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.853 [2024-11-27 06:31:09.259340] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.853 [2024-11-27 06:31:09.259352] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.853 [2024-11-27 06:31:09.259354] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.259356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.853 [2024-11-27 06:31:09.259370] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.259392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.853 [2024-11-27 06:31:09.259398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.853 [2024-11-27 06:31:09.259402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.853 [2024-11-27 06:31:09.259408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.853 [2024-11-27 06:31:09.259414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.853 [2024-11-27 06:31:09.259418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.853 [2024-11-27 06:31:09.259422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.853 [2024-11-27 06:31:09.259426] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
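Each retry block above follows the same bdev_nvme reset path: delete the qpairs, disconnect the ctrlr, attempt a reconnect, and on failure clear pending resets and report "Resetting controller failed" until the next poll. How often and for how long this loop runs is tunable; a hedged sketch using the bdev_nvme_set_options RPC (flag spellings as in current SPDK rpc.py — treat them as an assumption for older trees, and note these options are normally set before controllers are attached):

# Retry a lost path every 2 s, fail I/O over after 10 s,
# and give up on the controller entirely after 30 s.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options \
    --reconnect-delay-sec 2 \
    --fast-io-fail-timeout-sec 10 \
    --ctrlr-loss-timeout-sec 30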
00:28:49.853 [2024-11-27 06:31:09.259428] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.853 [2024-11-27 06:31:09.259430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.853 [2024-11-27 06:31:09.266581] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.853 [2024-11-27 06:31:09.266594] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.853 [2024-11-27 06:31:09.266596] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.266598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.853 [2024-11-27 06:31:09.266612] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.266632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.853 [2024-11-27 06:31:09.266638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.853 [2024-11-27 06:31:09.266642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.853 [2024-11-27 06:31:09.266649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.853 [2024-11-27 06:31:09.266655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.853 [2024-11-27 06:31:09.266659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.853 [2024-11-27 06:31:09.266663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.853 [2024-11-27 06:31:09.266666] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.853 [2024-11-27 06:31:09.266668] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.853 [2024-11-27 06:31:09.266670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.853 [2024-11-27 06:31:09.269375] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.853 [2024-11-27 06:31:09.269388] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.853 [2024-11-27 06:31:09.269390] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.853 [2024-11-27 06:31:09.269392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.853 [2024-11-27 06:31:09.269407] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
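The mdns_browse_handler and mdns_resolve_handler records earlier in this section come from SPDK's mDNS discovery client, which browses the `_nvme-disc._tcp` service type via Avahi and attaches one discovery controller per advertised service (hence the fedora39-...local:8009 TXT entries and the "entry exists already" de-duplication). The test would have started it roughly as below; the bdev-name prefixes mdns0/mdns1 are taken from this log:

# Browse for NVMe-oF discovery services advertised over mDNS.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns0 -s _nvme-disc._tcp
# Tear the browser (and its discovery ctrlr) down again when done.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns0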
00:28:49.853 [2024-11-27 06:31:09.269427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.853 [2024-11-27 06:31:09.269434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.853 [2024-11-27 06:31:09.269438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.853 [2024-11-27 06:31:09.269444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.854 [2024-11-27 06:31:09.269450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.854 [2024-11-27 06:31:09.269454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.854 [2024-11-27 06:31:09.269458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.854 [2024-11-27 06:31:09.269461] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:49.854 [2024-11-27 06:31:09.269463] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.854 [2024-11-27 06:31:09.269465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.854 [2024-11-27 06:31:09.276617] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.854 [2024-11-27 06:31:09.276629] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.854 [2024-11-27 06:31:09.276631] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.854 [2024-11-27 06:31:09.276634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.854 [2024-11-27 06:31:09.276648] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.854 [2024-11-27 06:31:09.276668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.854 [2024-11-27 06:31:09.276674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.854 [2024-11-27 06:31:09.276678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.854 [2024-11-27 06:31:09.276685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.854 [2024-11-27 06:31:09.276691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.854 [2024-11-27 06:31:09.276694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.854 [2024-11-27 06:31:09.276698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.854 [2024-11-27 06:31:09.276701] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
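The listener changes driving all of this are plain target-side RPCs: mdns_discovery.sh@182/@183 added 4421 listeners and @195/@196 removed the 4420 ones, which is what turned the once-working 4420 paths into the refused connections above. Reproduced with the NQN and addresses from this log:

# Publish a second path for the subsystem, then retire the first one.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4421
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420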
00:28:49.854 [2024-11-27 06:31:09.276704] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.854 [2024-11-27 06:31:09.276706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.854 [2024-11-27 06:31:09.279412] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.854 [2024-11-27 06:31:09.279423] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.854 [2024-11-27 06:31:09.279425] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.854 [2024-11-27 06:31:09.279428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.854 [2024-11-27 06:31:09.279440] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:49.854 [2024-11-27 06:31:09.279459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.854 [2024-11-27 06:31:09.279466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.854 [2024-11-27 06:31:09.279470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.854 [2024-11-27 06:31:09.279476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.854 [2024-11-27 06:31:09.279491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.854 [2024-11-27 06:31:09.279495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.854 [2024-11-27 06:31:09.279499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.854 [2024-11-27 06:31:09.279503] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:49.854 [2024-11-27 06:31:09.279505] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.854 [2024-11-27 06:31:09.279507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.854 [2024-11-27 06:31:09.286654] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.854 [2024-11-27 06:31:09.286669] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.854 [2024-11-27 06:31:09.286671] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.854 [2024-11-27 06:31:09.286674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.854 [2024-11-27 06:31:09.286689] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
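The get_notification_count checks in this trace count target events past a watermark: notify_get_notifications -i <last_id> returns only notifications newer than that id, jq '. | length' counts them, and the watermark advances (0 -> 2 -> 4 in this section) so each assertion only sees events since the previous one. A sketch of the same bookkeeping; the `id` field on each notification object is an assumption drawn from the RPC's JSON output:

# Count notifications newer than the last-seen id, then advance the watermark.
notify_id=0
notifications=$(./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id")
count=$(jq 'length' <<< "$notifications")
(( count > 0 )) && notify_id=$(jq '[.[].id] | max' <<< "$notifications")
echo "new notifications: $count (watermark now $notify_id)"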
00:28:49.854 [2024-11-27 06:31:09.286713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.854 [2024-11-27 06:31:09.286720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.854 [2024-11-27 06:31:09.286724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.854 [2024-11-27 06:31:09.286730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.854 [2024-11-27 06:31:09.286736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.854 [2024-11-27 06:31:09.286740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.854 [2024-11-27 06:31:09.286744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.854 [2024-11-27 06:31:09.286747] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.854 [2024-11-27 06:31:09.286750] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.854 [2024-11-27 06:31:09.286752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.854 [2024-11-27 06:31:09.289446] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.854 [2024-11-27 06:31:09.289458] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.854 [2024-11-27 06:31:09.289460] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.854 [2024-11-27 06:31:09.289463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.854 [2024-11-27 06:31:09.289477] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:49.854 [2024-11-27 06:31:09.289497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.854 [2024-11-27 06:31:09.289503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.854 [2024-11-27 06:31:09.289507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.854 [2024-11-27 06:31:09.289521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.854 [2024-11-27 06:31:09.289528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.854 [2024-11-27 06:31:09.289531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.854 [2024-11-27 06:31:09.289535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.854 [2024-11-27 06:31:09.289538] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
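The "got aer" / "sent discovery log page command" / "new path" / "ctrlr was created to ...:4421" sequence earlier in the section is the discovery service reacting to those listener changes: the AER invalidates the cached discovery log, the host re-reads it, and each log entry it is not yet connected to becomes an additional path under the same controller name. One way to watch the path set change (controller name as in this log):

# While both listeners exist this prints 4420 and 4421; after the
# remove_listener step only the 4421 path should survive.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
    | jq -r '.[].ctrlrs[].trid | "\(.traddr):\(.trsvcid)"'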
00:28:49.854 [2024-11-27 06:31:09.289541] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.854 [2024-11-27 06:31:09.289543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.854 [2024-11-27 06:31:09.296696] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.854 [2024-11-27 06:31:09.296708] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.854 [2024-11-27 06:31:09.296710] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.296712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.855 [2024-11-27 06:31:09.296726] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.296748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.855 [2024-11-27 06:31:09.296761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.855 [2024-11-27 06:31:09.296765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.855 [2024-11-27 06:31:09.296772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.855 [2024-11-27 06:31:09.296778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.855 [2024-11-27 06:31:09.296782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.855 [2024-11-27 06:31:09.296786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.855 [2024-11-27 06:31:09.296789] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.855 [2024-11-27 06:31:09.296792] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.855 [2024-11-27 06:31:09.296794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.855 [2024-11-27 06:31:09.299482] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.855 [2024-11-27 06:31:09.299493] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.855 [2024-11-27 06:31:09.299496] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.299498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.855 [2024-11-27 06:31:09.299511] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:28:49.855 [2024-11-27 06:31:09.299540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.855 [2024-11-27 06:31:09.299547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.855 [2024-11-27 06:31:09.299551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.855 [2024-11-27 06:31:09.299557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.855 [2024-11-27 06:31:09.299563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.855 [2024-11-27 06:31:09.299567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.855 [2024-11-27 06:31:09.299571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.855 [2024-11-27 06:31:09.299574] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:49.855 [2024-11-27 06:31:09.299576] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.855 [2024-11-27 06:31:09.299578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.855 [2024-11-27 06:31:09.306732] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.855 [2024-11-27 06:31:09.306746] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.855 [2024-11-27 06:31:09.306748] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.306750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.855 [2024-11-27 06:31:09.306764] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.306785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.855 [2024-11-27 06:31:09.306792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.855 [2024-11-27 06:31:09.306796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.855 [2024-11-27 06:31:09.306802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.855 [2024-11-27 06:31:09.306808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.855 [2024-11-27 06:31:09.306812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.855 [2024-11-27 06:31:09.306816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.855 [2024-11-27 06:31:09.306819] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:49.855 [2024-11-27 06:31:09.306821] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.855 [2024-11-27 06:31:09.306823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.855 [2024-11-27 06:31:09.309516] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.855 [2024-11-27 06:31:09.309528] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.855 [2024-11-27 06:31:09.309530] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.309533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.855 [2024-11-27 06:31:09.309547] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.309567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.855 [2024-11-27 06:31:09.309573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.855 [2024-11-27 06:31:09.309577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.855 [2024-11-27 06:31:09.309583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.855 [2024-11-27 06:31:09.309589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.855 [2024-11-27 06:31:09.309593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.855 [2024-11-27 06:31:09.309597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.855 [2024-11-27 06:31:09.309601] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:49.855 [2024-11-27 06:31:09.309603] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.855 [2024-11-27 06:31:09.309605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.855 [2024-11-27 06:31:09.316770] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.855 [2024-11-27 06:31:09.316782] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.855 [2024-11-27 06:31:09.316784] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.316786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.855 [2024-11-27 06:31:09.316801] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:49.855 [2024-11-27 06:31:09.316822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.855 [2024-11-27 06:31:09.316828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.855 [2024-11-27 06:31:09.316832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.855 [2024-11-27 06:31:09.316839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.855 [2024-11-27 06:31:09.316845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.855 [2024-11-27 06:31:09.316848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.855 [2024-11-27 06:31:09.316852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.855 [2024-11-27 06:31:09.316856] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.855 [2024-11-27 06:31:09.316858] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.855 [2024-11-27 06:31:09.316860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.855 [2024-11-27 06:31:09.319552] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.855 [2024-11-27 06:31:09.319563] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.855 [2024-11-27 06:31:09.319566] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.319568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.855 [2024-11-27 06:31:09.319581] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:49.855 [2024-11-27 06:31:09.319600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.855 [2024-11-27 06:31:09.319606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.855 [2024-11-27 06:31:09.319610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.855 [2024-11-27 06:31:09.319617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.855 [2024-11-27 06:31:09.319623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.855 [2024-11-27 06:31:09.319627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.855 [2024-11-27 06:31:09.319631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.855 [2024-11-27 06:31:09.319634] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:28:49.855 [2024-11-27 06:31:09.319636] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.855 [2024-11-27 06:31:09.319638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.856 [2024-11-27 06:31:09.326806] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.856 [2024-11-27 06:31:09.326822] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.856 [2024-11-27 06:31:09.326824] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.326826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.856 [2024-11-27 06:31:09.326841] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.326864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.856 [2024-11-27 06:31:09.326871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.856 [2024-11-27 06:31:09.326876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.856 [2024-11-27 06:31:09.326882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.856 [2024-11-27 06:31:09.326889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.856 [2024-11-27 06:31:09.326892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.856 [2024-11-27 06:31:09.326896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.856 [2024-11-27 06:31:09.326899] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.856 [2024-11-27 06:31:09.326902] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.856 [2024-11-27 06:31:09.326911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.856 [2024-11-27 06:31:09.329586] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.856 [2024-11-27 06:31:09.329598] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.856 [2024-11-27 06:31:09.329601] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.329603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.856 [2024-11-27 06:31:09.329618] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:28:49.856 [2024-11-27 06:31:09.329639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.856 [2024-11-27 06:31:09.329646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.856 [2024-11-27 06:31:09.329650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.856 [2024-11-27 06:31:09.329656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.856 [2024-11-27 06:31:09.329663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.856 [2024-11-27 06:31:09.329666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.856 [2024-11-27 06:31:09.329670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.856 [2024-11-27 06:31:09.329673] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:49.856 [2024-11-27 06:31:09.329676] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.856 [2024-11-27 06:31:09.329678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.856 [2024-11-27 06:31:09.336846] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.856 [2024-11-27 06:31:09.336859] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.856 [2024-11-27 06:31:09.336861] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.336864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.856 [2024-11-27 06:31:09.336877] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.336898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.856 [2024-11-27 06:31:09.336912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.856 [2024-11-27 06:31:09.336916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.856 [2024-11-27 06:31:09.336923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.856 [2024-11-27 06:31:09.336929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.856 [2024-11-27 06:31:09.336932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.856 [2024-11-27 06:31:09.336936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.856 [2024-11-27 06:31:09.336940] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:49.856 [2024-11-27 06:31:09.336942] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.856 [2024-11-27 06:31:09.336944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.856 [2024-11-27 06:31:09.339623] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.856 [2024-11-27 06:31:09.339635] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.856 [2024-11-27 06:31:09.339637] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.339639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.856 [2024-11-27 06:31:09.339653] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.339672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.856 [2024-11-27 06:31:09.339678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.856 [2024-11-27 06:31:09.339682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.856 [2024-11-27 06:31:09.339689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.856 [2024-11-27 06:31:09.339695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.856 [2024-11-27 06:31:09.339699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.856 [2024-11-27 06:31:09.339703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.856 [2024-11-27 06:31:09.339706] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:49.856 [2024-11-27 06:31:09.339708] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.856 [2024-11-27 06:31:09.339711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.856 [2024-11-27 06:31:09.346883] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.856 [2024-11-27 06:31:09.346896] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.856 [2024-11-27 06:31:09.346898] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.856 [2024-11-27 06:31:09.346900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.856 [2024-11-27 06:31:09.346919] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:49.856 [2024-11-27 06:31:09.346940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.856 [2024-11-27 06:31:09.346947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.856 [2024-11-27 06:31:09.346951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.856 [2024-11-27 06:31:09.346957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.856 [2024-11-27 06:31:09.346963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.856 [2024-11-27 06:31:09.346967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.857 [2024-11-27 06:31:09.346970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.857 [2024-11-27 06:31:09.346974] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.857 [2024-11-27 06:31:09.346976] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.857 [2024-11-27 06:31:09.346978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.857 [2024-11-27 06:31:09.349658] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.857 [2024-11-27 06:31:09.349669] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.857 [2024-11-27 06:31:09.349672] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.857 [2024-11-27 06:31:09.349674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.857 [2024-11-27 06:31:09.349688] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:28:49.857 [2024-11-27 06:31:09.349708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.857 [2024-11-27 06:31:09.349714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.857 [2024-11-27 06:31:09.349718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.857 [2024-11-27 06:31:09.349724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.857 [2024-11-27 06:31:09.349730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.857 [2024-11-27 06:31:09.349734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.857 [2024-11-27 06:31:09.349738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.857 [2024-11-27 06:31:09.349741] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:28:49.857 [2024-11-27 06:31:09.349743] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.857 [2024-11-27 06:31:09.349745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.857 [2024-11-27 06:31:09.356925] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:49.857 [2024-11-27 06:31:09.356938] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:49.857 [2024-11-27 06:31:09.356940] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:49.857 [2024-11-27 06:31:09.356943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:49.857 [2024-11-27 06:31:09.356957] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:49.857 [2024-11-27 06:31:09.356979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.857 [2024-11-27 06:31:09.356986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bb1e0 with addr=10.0.0.3, port=4420 00:28:49.857 [2024-11-27 06:31:09.356989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bb1e0 is same with the state(6) to be set 00:28:49.857 [2024-11-27 06:31:09.356996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bb1e0 (9): Bad file descriptor 00:28:49.857 [2024-11-27 06:31:09.357002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:49.857 [2024-11-27 06:31:09.357006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:49.857 [2024-11-27 06:31:09.357010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:49.857 [2024-11-27 06:31:09.357013] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:49.857 [2024-11-27 06:31:09.357015] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:49.857 [2024-11-27 06:31:09.357017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:49.857 [2024-11-27 06:31:09.359694] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:28:49.857 [2024-11-27 06:31:09.359706] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:28:49.857 [2024-11-27 06:31:09.359708] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:28:49.857 [2024-11-27 06:31:09.359710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:28:49.857 [2024-11-27 06:31:09.359723] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
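Each retry cycle above walks the same bdev_nvme reset sequence: delete qpairs, disconnect the ctrlr, start reconnecting, hit the refused connect(), mark the ctrlr failed, then clear pending resets and report the reset as failed. When triaging a run like this it can help to count the cycles per run from a saved console log; a small sketch, where build.log is a placeholder for wherever this output was captured:

    # number of reconnect attempts across both subsystems
    grep -c 'Start reconnecting ctrlr' build.log
    # refused TCP connects, which should match the attempt count in this phase
    grep -c 'connect() failed, errno = 111' build.log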
00:28:49.857 [2024-11-27 06:31:09.359743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.857 [2024-11-27 06:31:09.359749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f75e0 with addr=10.0.0.4, port=4420 00:28:49.857 [2024-11-27 06:31:09.359753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f75e0 is same with the state(6) to be set 00:28:49.857 [2024-11-27 06:31:09.359760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f75e0 (9): Bad file descriptor 00:28:49.857 [2024-11-27 06:31:09.359766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:28:49.857 [2024-11-27 06:31:09.359769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:28:49.857 [2024-11-27 06:31:09.359773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:28:49.857 [2024-11-27 06:31:09.359776] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:28:49.857 [2024-11-27 06:31:09.359779] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:28:49.857 [2024-11-27 06:31:09.359781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:28:49.857 [2024-11-27 06:31:09.366551] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:28:49.857 [2024-11-27 06:31:09.366569] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:49.857 [2024-11-27 06:31:09.366578] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:49.857 [2024-11-27 06:31:09.366595] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:28:49.857 [2024-11-27 06:31:09.366603] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:49.857 [2024-11-27 06:31:09.366610] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:50.116 [2024-11-27 06:31:09.452623] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:50.116 [2024-11-27 06:31:09.452654] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:51.052 06:31:10 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.052 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 
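The @199-@202 assertions in this stretch reduce to thin rpc_cmd + jq pipelines against the host application's RPC socket: list the controller (or bdev) names, sort them, and compare against the expected whitespace-joined string. After the 4420 listeners were pulled, each controller is expected to expose only the 4421 path. rpc_cmd in the trace forwards to SPDK's scripts/rpc.py, so a standalone equivalent of the get_subsystem_paths helper looks roughly like this (run from the SPDK repo root, with the app still serving /tmp/host.sock):

    get_subsystem_paths() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    [[ "$(get_subsystem_paths mdns0_nvme0)" == "4421" ]] && echo 'only port 4421 left'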
00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.053 06:31:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:28:51.053 [2024-11-27 06:31:10.444698] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:28:51.986 06:31:11 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:51.986 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:52.244 [2024-11-27 06:31:11.594678] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:28:52.244 2024/11/27 06:31:11 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:28:52.244 request: 00:28:52.244 { 00:28:52.244 "method": "bdev_nvme_start_mdns_discovery", 00:28:52.244 "params": { 00:28:52.244 "name": "mdns", 00:28:52.244 "svcname": "_nvme-disc._http", 00:28:52.244 "hostnqn": "nqn.2021-12.io.spdk:test" 00:28:52.244 } 00:28:52.244 } 00:28:52.244 Got JSON-RPC error response 00:28:52.244 GoRPCClient: error on JSON-RPC call 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:52.244 06:31:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:28:52.811 [2024-11-27 06:31:12.183414] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:28:52.811 [2024-11-27 06:31:12.283411] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:28:53.105 [2024-11-27 06:31:12.383417] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:53.105 [2024-11-27 06:31:12.383431] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:53.105 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:53.105 cookie is 0 00:28:53.105 is_local: 1 00:28:53.105 our_own: 0 00:28:53.105 wide_area: 0 00:28:53.105 multicast: 1 00:28:53.105 cached: 1 00:28:53.105 [2024-11-27 06:31:12.483421] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:53.105 [2024-11-27 06:31:12.483439] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:28:53.105 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:53.105 cookie is 0 00:28:53.105 is_local: 1 00:28:53.105 our_own: 0 00:28:53.105 wide_area: 0 00:28:53.105 multicast: 1 00:28:53.105 cached: 1 00:28:53.105 [2024-11-27 06:31:12.483446] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:28:53.105 [2024-11-27 06:31:12.583428] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:28:53.105 [2024-11-27 06:31:12.583452] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:53.105 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:53.105 cookie is 0 00:28:53.105 is_local: 1 00:28:53.105 our_own: 0 00:28:53.105 wide_area: 0 00:28:53.105 multicast: 1 00:28:53.105 cached: 1 00:28:53.390 [2024-11-27 06:31:12.683435] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:28:53.390 [2024-11-27 06:31:12.683470] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:28:53.390 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:53.390 cookie is 0 00:28:53.390 is_local: 1 00:28:53.390 our_own: 0 00:28:53.390 wide_area: 0 00:28:53.390 multicast: 1 00:28:53.390 cached: 1 00:28:53.390 [2024-11-27 06:31:12.683477] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:28:53.957 [2024-11-27 06:31:13.388117] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:28:53.957 [2024-11-27 06:31:13.388142] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:28:53.957 [2024-11-27 06:31:13.388151] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:28:53.957 [2024-11-27 06:31:13.474200] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:28:54.215 [2024-11-27 06:31:13.532399] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:28:54.215 [2024-11-27 06:31:13.532740] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x10b4eb0:1 started. 00:28:54.215 [2024-11-27 06:31:13.533786] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:28:54.215 [2024-11-27 06:31:13.533806] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:28:54.215 [2024-11-27 06:31:13.536985] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x10b4eb0 was disconnected and freed. delete nvme_qpair. 00:28:54.215 [2024-11-27 06:31:13.588001] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:54.215 [2024-11-27 06:31:13.588017] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:54.215 [2024-11-27 06:31:13.588025] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:54.215 [2024-11-27 06:31:13.674076] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:28:54.215 [2024-11-27 06:31:13.732255] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:28:54.215 [2024-11-27 06:31:13.732570] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x10c74c0:1 started. 00:28:54.215 [2024-11-27 06:31:13.733620] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:28:54.215 [2024-11-27 06:31:13.733640] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:28:54.215 [2024-11-27 06:31:13.737009] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x10c74c0 was disconnected and freed. delete nvme_qpair. 
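Both NOT-wrapped rpc_cmd calls in this test (the _nvme-disc._http attempt above and the cdc attempt just below) exercise the duplicate-start guards in bdev_mdns_client.c: with the mdns discovery service already running, a second bdev_nvme_start_mdns_discovery is rejected with Code=-17 (File exists), whether it reuses the bdev name or the service type. Since rpc_cmd forwards to scripts/rpc.py, the same behavior can be reproduced by hand (socket path as in the test):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # a second start while the first is running fails with Code=-17 Msg=File exists
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test || echo 'duplicate rejected'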
00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- common/autotest_common.sh@652 -- # local es=0 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.498 [2024-11-27 06:31:16.716057] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:28:57.498 request: 00:28:57.498 { 00:28:57.498 "method": "bdev_nvme_start_mdns_discovery", 00:28:57.498 "params": { 00:28:57.498 "name": "cdc", 00:28:57.498 "svcname": "_nvme-disc._tcp", 00:28:57.498 "hostnqn": "nqn.2021-12.io.spdk:test" 00:28:57.498 } 00:28:57.498 } 00:28:57.498 Got JSON-RPC error response 00:28:57.498 GoRPCClient: error on JSON-RPC call 00:28:57.498 2024/11/27 06:31:16 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.498 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ 
mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:57.499 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:57.499 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:28:57.499 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:57.499 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:57.499 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:57.499 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:57.499 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:57.499 06:31:16 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.499 06:31:16 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:28:57.499 [2024-11-27 06:31:16.883452] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:58.434 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:28:58.434 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:28:58.434 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 114126 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 114126 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 114155 00:28:58.434 Got SIGTERM, quitting. 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:28:58.434 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.434 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:28:58.434 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:28:58.434 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:28:58.434 avahi-daemon 0.8 exiting. 
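check_mdns_request_exists (@85-@108 in the trace) drives avahi-browse in parseable mode and scans each output line for the advertising process name, address, and port on the same line; with check_type='not found' it passes only when no line matches all three. A condensed sketch of that logic, assuming avahi-browse is installed on the host (the real helper's control flow differs in detail):

    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4 line match=no
        while IFS= read -r line; do
            [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]] && match=yes
        done < <(avahi-browse -t -r _nvme-disc._tcp -p)
        [[ $check_type == found ]] && { [[ $match == yes ]]; return; }
        [[ $match == no ]]
    }
    check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' && echo 'spdk1 listener gone'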
00:28:58.692 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.692 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:28:58.692 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.692 06:31:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.692 rmmod nvme_tcp 00:28:58.692 rmmod nvme_fabrics 00:28:58.692 rmmod nvme_keyring 00:28:58.692 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.692 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:28:58.692 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:28:58.692 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 114076 ']' 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 114076 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 114076 ']' 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 114076 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114076 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114076' 00:28:58.693 killing process with pid 114076 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 114076 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 114076 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:58.693 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:28:58.951 00:28:58.951 real 0m21.565s 00:28:58.951 user 0m41.713s 00:28:58.951 sys 0m1.575s 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:58.951 ************************************ 00:28:58.951 END TEST nvmf_mdns_discovery 00:28:58.951 ************************************ 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.951 ************************************ 00:28:58.951 START TEST nvmf_host_multipath 00:28:58.951 ************************************ 00:28:58.951 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:59.211 * Looking for test storage... 
00:28:59.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.211 --rc genhtml_branch_coverage=1 00:28:59.211 --rc genhtml_function_coverage=1 00:28:59.211 --rc genhtml_legend=1 00:28:59.211 --rc geninfo_all_blocks=1 00:28:59.211 --rc geninfo_unexecuted_blocks=1 00:28:59.211 00:28:59.211 ' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.211 --rc genhtml_branch_coverage=1 00:28:59.211 --rc genhtml_function_coverage=1 00:28:59.211 --rc genhtml_legend=1 00:28:59.211 --rc geninfo_all_blocks=1 00:28:59.211 --rc geninfo_unexecuted_blocks=1 00:28:59.211 00:28:59.211 ' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.211 --rc genhtml_branch_coverage=1 00:28:59.211 --rc genhtml_function_coverage=1 00:28:59.211 --rc genhtml_legend=1 00:28:59.211 --rc geninfo_all_blocks=1 00:28:59.211 --rc geninfo_unexecuted_blocks=1 00:28:59.211 00:28:59.211 ' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:59.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.211 --rc genhtml_branch_coverage=1 00:28:59.211 --rc genhtml_function_coverage=1 00:28:59.211 --rc genhtml_legend=1 00:28:59.211 --rc geninfo_all_blocks=1 00:28:59.211 --rc geninfo_unexecuted_blocks=1 00:28:59.211 00:28:59.211 ' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.211 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:59.212 Cannot find device "nvmf_init_br" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:59.212 Cannot find device "nvmf_init_br2" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:59.212 Cannot find device "nvmf_tgt_br" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:59.212 Cannot find device "nvmf_tgt_br2" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:59.212 Cannot find device "nvmf_init_br" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:59.212 Cannot find device "nvmf_init_br2" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:59.212 Cannot find device "nvmf_tgt_br" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:59.212 Cannot find device "nvmf_tgt_br2" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:59.212 Cannot find device "nvmf_br" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:59.212 Cannot find device "nvmf_init_if" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:59.212 Cannot find device "nvmf_init_if2" 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:28:59.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:59.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:59.212 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:59.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:59.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:28:59.471 00:28:59.471 --- 10.0.0.3 ping statistics --- 00:28:59.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.471 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:59.471 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:59.471 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:28:59.471 00:28:59.471 --- 10.0.0.4 ping statistics --- 00:28:59.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.471 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:59.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:28:59.471 00:28:59.471 --- 10.0.0.1 ping statistics --- 00:28:59.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.471 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:28:59.471 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:59.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:59.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:28:59.471 00:28:59.471 --- 10.0.0.2 ping statistics --- 00:28:59.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.471 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=114789 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 114789 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 114789 ']' 00:28:59.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.472 06:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:59.472 [2024-11-27 06:31:18.947242] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:28:59.472 [2024-11-27 06:31:18.947295] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.730 [2024-11-27 06:31:19.062684] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:28:59.730 [2024-11-27 06:31:19.092642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:59.730 [2024-11-27 06:31:19.108451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.730 [2024-11-27 06:31:19.108563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.730 [2024-11-27 06:31:19.108601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.730 [2024-11-27 06:31:19.108632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.730 [2024-11-27 06:31:19.108661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.730 [2024-11-27 06:31:19.109370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.730 [2024-11-27 06:31:19.109556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.296 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.296 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:29:00.296 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.296 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.296 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:00.554 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.554 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=114789 00:29:00.554 06:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:00.554 [2024-11-27 06:31:20.037136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.554 06:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:00.812 Malloc0 00:29:00.812 06:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:01.070 06:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:01.329 06:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:01.329 [2024-11-27 06:31:20.837868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:01.329 06:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:29:01.587 [2024-11-27 06:31:21.041976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=114883 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 114883 /var/tmp/bdevperf.sock 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 114883 ']' 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.587 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:02.565 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.565 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:29:02.565 06:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:02.824 06:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:03.082 Nvme0n1 00:29:03.082 06:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:03.341 Nvme0n1 00:29:03.341 06:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:29:03.341 06:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:04.719 06:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:29:04.719 06:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:04.719 06:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:04.719 06:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:29:04.719 06:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:04.719 06:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114970 00:29:04.719 06:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:11.283 Attaching 4 probes... 00:29:11.283 @path[10.0.0.3, 4421]: 26229 00:29:11.283 @path[10.0.0.3, 4421]: 26265 00:29:11.283 @path[10.0.0.3, 4421]: 26353 00:29:11.283 @path[10.0.0.3, 4421]: 26351 00:29:11.283 @path[10.0.0.3, 4421]: 26298 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114970 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:11.283 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:29:11.542 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:29:11.542 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:11.542 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115107 00:29:11.542 06:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:18.118 06:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:18.118 06:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] 
| select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:18.118 Attaching 4 probes... 00:29:18.118 @path[10.0.0.3, 4420]: 25766 00:29:18.118 @path[10.0.0.3, 4420]: 26220 00:29:18.118 @path[10.0.0.3, 4420]: 26091 00:29:18.118 @path[10.0.0.3, 4420]: 26196 00:29:18.118 @path[10.0.0.3, 4420]: 26312 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 115107 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115237 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:18.118 06:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:24.693 Attaching 4 probes... 
00:29:24.693 @path[10.0.0.3, 4421]: 15657 00:29:24.693 @path[10.0.0.3, 4421]: 25838 00:29:24.693 @path[10.0.0.3, 4421]: 26230 00:29:24.693 @path[10.0.0.3, 4421]: 26162 00:29:24.693 @path[10.0.0.3, 4421]: 25977 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 115237 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:29:24.693 06:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:29:24.693 06:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:29:24.693 06:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115368 00:29:24.693 06:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:24.693 06:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:31.385 Attaching 4 probes... 
00:29:31.385 00:29:31.385 00:29:31.385 00:29:31.385 00:29:31.385 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 115368 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115498 00:29:31.385 06:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:37.948 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:37.948 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:37.948 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:29:37.948 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:37.948 Attaching 4 probes... 
00:29:37.949 @path[10.0.0.3, 4421]: 25015 00:29:37.949 @path[10.0.0.3, 4421]: 25715 00:29:37.949 @path[10.0.0.3, 4421]: 25998 00:29:37.949 @path[10.0.0.3, 4421]: 25461 00:29:37.949 @path[10.0.0.3, 4421]: 25507 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 115498 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:37.949 06:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:29:37.949 [2024-11-27 06:31:57.165321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set
00:29:37.950 [... the same tcp.c:1773 message repeats verbatim, timestamps 06:31:57.165356 through 06:31:57.165642; duplicate lines collapsed ...]
same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 [2024-11-27 06:31:57.165784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa86620 is same with the state(6) to be set 00:29:37.950 06:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:29:38.885 06:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:29:38.885 06:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115627 00:29:38.885 06:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:29:38.885 06:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
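The confirm_io_on_port helper traced here (multipath.sh@64-73) starts the nvmf_path.bt bpftrace probe against the target, lets I/O run for six seconds, and then asks the target over RPC which listener currently carries the expected ANA state. A minimal sketch of that listener check, using only the rpc.py path, NQN, and jq filter seen in this run (rpc_py is just a local shorthand, not a name from the script):

    # Ask the target for cnode1's listeners and keep the port (trsvcid)
    # whose first reported ANA state matches the expected one.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    active_port=$("$rpc_py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid')
    [[ $active_port == "4420" ]]  # the script then compares this against the expected port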
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:45.444 Attaching 4 probes...
00:29:45.444 @path[10.0.0.3, 4420]: 25196
00:29:45.444 @path[10.0.0.3, 4420]: 25547
00:29:45.444 @path[10.0.0.3, 4420]: 25915
00:29:45.444 @path[10.0.0.3, 4420]: 25766
00:29:45.444 @path[10.0.0.3, 4420]: 25912
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 115627
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:29:45.444 [2024-11-27 06:32:04.603331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:29:45.444 06:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6
00:29:52.041 06:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:29:52.042 06:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115826
00:29:52.042 06:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:29:52.042 06:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:29:58.610 06:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:29:58.610 06:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
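The trace.txt dumped next holds the bpftrace @path map, one "@path[<addr>, <port>]: <I/O count>" line per recorded sample; multipath.sh@69 reduces it to a single port number with the cut/awk/sed commands traced above (the shell trace shows them individually, but they form one pipeline). A standalone sketch of that extraction, assuming only the trace path from this run:

    # "@path[10.0.0.3, 4420]: 25196" -> "@path[10.0.0.3, 4420" -> "4420"
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    port=$(cut -d ']' -f1 "$trace" |
        awk '$1=="@path[10.0.0.3," {print $2}' |
        sed -n 1p)   # keep the first sample's port only

Lines without a ']' (such as "Attaching 4 probes...") pass through cut unchanged and are then dropped by the awk match, so only @path entries survive.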
00:29:58.610 Attaching 4 probes...
00:29:58.610 @path[10.0.0.3, 4421]: 25115
00:29:58.610 @path[10.0.0.3, 4421]: 25606
00:29:58.610 @path[10.0.0.3, 4421]: 25556
00:29:58.610 @path[10.0.0.3, 4421]: 25538
00:29:58.610 @path[10.0.0.3, 4421]: 25362
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 115826
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 114883
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 114883 ']'
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 114883
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114883
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
killing process with pid 114883
06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114883'
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 114883
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 114883
00:29:58.610 {
00:29:58.610 "results": [
00:29:58.610 {
00:29:58.610 "job": "Nvme0n1",
00:29:58.610 "core_mask": "0x4",
00:29:58.610 "workload": "verify",
00:29:58.610 "status": "terminated",
00:29:58.610 "verify_range": {
00:29:58.610 "start": 0,
00:29:58.610 "length": 16384
00:29:58.610 },
00:29:58.610 "queue_depth": 128,
00:29:58.610 "io_size": 4096,
00:29:58.610 "runtime": 54.188794,
00:29:58.610 "iops": 11022.426518663618,
00:29:58.610 "mibps": 43.05635358852976,
00:29:58.610 "io_failed": 0,
00:29:58.610 "io_timeout": 0,
00:29:58.610 "avg_latency_us": 11591.309693936582,
00:29:58.610 "min_latency_us": 1380.0369230769231,
00:29:58.610 "max_latency_us": 7020619.618461538
00:29:58.610 }
00:29:58.610 ],
00:29:58.610 "core_count": 1
00:29:58.610 }
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 114883
00:29:58.610 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
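The JSON block above is bdevperf's final report for the Nvme0n1 verify job, and its throughput figures are internally consistent: 11022.43 I/O per second at the job's 4096-byte io_size is exactly the reported 43.06 MiB/s. A quick arithmetic check, with the values copied from the report:

    # iops * io_size bytes, divided by 1048576 bytes per MiB -> MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 11022.426518663618 * 4096 / 1048576 }'
    # prints "43.06 MiB/s", matching "mibps": 43.05635358852976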
00:29:58.610 [2024-11-27 06:31:21.091192] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization...
00:29:58.610 [2024-11-27 06:31:21.091254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114883 ]
00:29:58.610 [2024-11-27 06:31:21.204655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:58.610 [2024-11-27 06:31:21.230963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:58.610 [2024-11-27 06:31:21.249402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:58.610 Running I/O for 90 seconds...
00:29:58.610 13302.00 IOPS, 51.96 MiB/s [2024-11-27T06:32:18.140Z]
13394.50 IOPS, 52.32 MiB/s [2024-11-27T06:32:18.140Z]
13378.33 IOPS, 52.26 MiB/s [2024-11-27T06:32:18.140Z]
13316.50 IOPS, 52.02 MiB/s [2024-11-27T06:32:18.140Z]
13284.40 IOPS, 51.89 MiB/s [2024-11-27T06:32:18.140Z]
13253.83 IOPS, 51.77 MiB/s [2024-11-27T06:32:18.140Z]
13243.29 IOPS, 51.73 MiB/s [2024-11-27T06:32:18.140Z]
[2024-11-27 06:31:30.842272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-27 06:31:30.842313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[... similar nvme_qpair command/completion notices omitted: WRITE lba 54424-54928 and READ lba 54224-54408, all completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:29:58.613 [2024-11-27 06:31:30.849639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:58.613 [2024-11-27 06:31:30.849673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:58.613 13200.38 IOPS, 51.56 MiB/s [2024-11-27T06:32:18.143Z]
13182.78 IOPS, 51.50 MiB/s [2024-11-27T06:32:18.143Z]
13177.20 IOPS, 51.47 MiB/s [2024-11-27T06:32:18.143Z]
13166.55 IOPS, 51.43 MiB/s [2024-11-27T06:32:18.143Z]
13163.25 IOPS, 51.42 MiB/s [2024-11-27T06:32:18.143Z]
13149.54 IOPS, 51.37 MiB/s [2024-11-27T06:32:18.143Z]
13166.93 IOPS, 51.43 MiB/s [2024-11-27T06:32:18.143Z]
[2024-11-27 06:31:37.279816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-27 06:31:37.280230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.280541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.280593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.280631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.280663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.280703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.280736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.280772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.280825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.280866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.280900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.280950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.280984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.281961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.281990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
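The paired NOTICE lines above all follow one shape: nvme_qpair.c: 243:nvme_io_qpair_print_command prints each queued READ/WRITE, and nvme_qpair.c: 474:spdk_nvme_print_completion prints its completion status, here ASYMMETRIC ACCESS INACCESSIBLE (03/02), apparently because the test has moved the active path's ANA state to inaccessible. A minimal sketch for tallying these pairs offline follows; the script name and the fields kept are assumptions for illustration, not part of the autotest suite.

#!/usr/bin/env python3
# count_ana_errors.py -- illustrative sketch (hypothetical helper, not part
# of the SPDK autotest suite): tally printed commands and ANA-inaccessible
# completions from a console log in the format shown above.
import re
import sys
from collections import Counter

# Field layout taken directly from the NOTICE lines in this log.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"ASYMMETRIC ACCESS INACCESSIBLE \(03/02\)")

def main(path: str) -> None:
    opcodes = Counter()
    completions = 0
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            # A single physical line may hold many entries, so scan with
            # finditer/findall rather than assuming one entry per line.
            for m in CMD_RE.finditer(line):
                opcodes[m.group(1)] += 1
            completions += len(CPL_RE.findall(line))
    print("commands printed:", dict(opcodes))
    print("ANA-inaccessible completions:", completions)

if __name__ == "__main__":
    main(sys.argv[1])
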
00:29:58.613 [2024-11-27 06:31:37.282699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.282938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-27 06:31:37.282973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:58.613 [2024-11-27 06:31:37.283012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.283983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.283996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
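Interleaved with the NOTICE flood are periodic throughput samples of the form "<value> IOPS, <value> MiB/s [timestamp]", printed by the benchmark while the path is inaccessible; pulling them out gives a quick view of how the workload holds up during failover. A small sketch for extracting them (again a hypothetical helper with an assumed script name):

#!/usr/bin/env python3
# iops_samples.py -- illustrative sketch (hypothetical helper): pull the
# "<iops> IOPS, <MiB/s> MiB/s" progress samples out of the log and summarize.
import re
import sys

SAMPLE_RE = re.compile(r"(\d+(?:\.\d+)?) IOPS, (\d+(?:\.\d+)?) MiB/s")

def main(path: str) -> None:
    iops = []
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            iops.extend(float(m.group(1)) for m in SAMPLE_RE.finditer(line))
    if not iops:
        print("no samples found")
        return
    print(f"samples={len(iops)} min={min(iops):.2f} "
          f"avg={sum(iops) / len(iops):.2f} max={max(iops):.2f} IOPS")

if __name__ == "__main__":
    main(sys.argv[1])
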
00:29:58.614 [2024-11-27 06:31:37.284197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-27 06:31:37.284285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:58.614 [2024-11-27 06:31:37.284298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:37.284642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.284797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.284811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.285500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.285511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.285526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.285532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.285548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.285554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:37.285709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-27 06:31:37.285720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:58.615
12630.67 IOPS, 49.34 MiB/s [2024-11-27T06:32:18.145Z]
12335.81 IOPS, 48.19 MiB/s [2024-11-27T06:32:18.145Z]
12370.24 IOPS, 48.32 MiB/s [2024-11-27T06:32:18.145Z]
12413.28 IOPS, 48.49 MiB/s [2024-11-27T06:32:18.145Z]
12448.79 IOPS, 48.63 MiB/s [2024-11-27T06:32:18.145Z]
12476.30 IOPS, 48.74 MiB/s [2024-11-27T06:32:18.145Z]
12499.95 IOPS, 48.83 MiB/s [2024-11-27T06:32:18.145Z]
[2024-11-27 06:31:44.093804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:44.093851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:44.093882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:44.093890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:44.093911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:44.093919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:44.093931] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:44.093952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:44.093964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:44.093971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:44.093983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:44.093990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:58.615 [2024-11-27 06:31:44.094001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-27 06:31:44.094008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.616 [2024-11-27 06:31:44.094025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.616 [2024-11-27 06:31:44.094044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.616 [2024-11-27 06:31:44.094062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.616 [2024-11-27 06:31:44.094080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.616 [2024-11-27 06:31:44.094099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.616 [2024-11-27 06:31:44.094117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 
06:31:44.094128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.616 [2024-11-27 06:31:44.094135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-27 06:31:44.094727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:58.616 [2024-11-27 06:31:44.094739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.094746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.094965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.094978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.094993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:77 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.617 [2024-11-27 06:31:44.095644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.617 [2024-11-27 06:31:44.095666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.617 [2024-11-27 06:31:44.095687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.617 [2024-11-27 06:31:44.095708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.617 [2024-11-27 06:31:44.095728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.617 [2024-11-27 06:31:44.095749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:58.617 [2024-11-27 06:31:44.095763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.095982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.095989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 
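
Every completion in the burst above carries the status pair (03/02), which spdk_nvme_print_completion renders as (SCT/SC) in hex: Status Code Type 0x3 is Path Related Status and Status Code 0x02 is Asymmetric Access Inaccessible, i.e. the ANA state of the path serving nsid:1 reports it unusable, as expected while the target side of this test fails over, so every READ/WRITE on qid:1 is failed back to the host. The p/m/dnr flags come from the same completion dword; dnr:0 (Do Not Retry clear) means the command may be retried on another path. Below is a minimal, self-contained decoder for the two status pairs this log prints, with field offsets taken from the NVMe base spec (CQE dword 3) rather than from SPDK's headers — a sketch for reading the log, not SPDK's own print routine:

/* decode_status.c - decode the (SCT/SC) pairs printed in this log.
 * Build: cc -Wall -o decode_status decode_status.c */
#include <stdint.h>
#include <stdio.h>

static const char *status_str(uint8_t sct, uint8_t sc)
{
    /* Only the two pairs that appear in this log. */
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    if (sct == 0x3 && sc == 0x02) return "ASYMMETRIC ACCESS INACCESSIBLE";
    return "OTHER";
}

/* sf = CQE DW3 bits 31:16: [15] DNR, [14] M, [13:12] CRD, [11:9] SCT, [8:1] SC, [0] phase */
static void print_status(uint16_t sf)
{
    uint8_t sc  = (uint8_t)((sf >> 1) & 0xff);
    uint8_t sct = (uint8_t)((sf >> 9) & 0x07);
    printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n", status_str(sct, sc), sct, sc,
           sf & 1u, (sf >> 14) & 1u, (sf >> 15) & 1u);
}

int main(void)
{
    print_status((uint16_t)((0x3u << 9) | (0x02u << 1))); /* the (03/02) lines here */
    print_status((uint16_t)((0x0u << 9) | (0x08u << 1))); /* the (00/08) lines later in the log */
    return 0;
}
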
[2024-11-27 06:31:44.096155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.618 [2024-11-27 06:31:44.096701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:58.618 [2024-11-27 06:31:44.096718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:44.096725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:44.096742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:44.096749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:44.096766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:44.096772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:44.096790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:44.096796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:44.096830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:44.096838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:44.096855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:44.096862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.619 12045.27 IOPS, 47.05 MiB/s [2024-11-27T06:32:18.149Z] 11521.57 IOPS, 45.01 MiB/s [2024-11-27T06:32:18.149Z] 11041.50 IOPS, 43.13 MiB/s [2024-11-27T06:32:18.149Z] 10599.84 IOPS, 41.41 MiB/s [2024-11-27T06:32:18.149Z] 10192.15 IOPS, 39.81 MiB/s [2024-11-27T06:32:18.149Z] 9814.67 IOPS, 38.34 MiB/s [2024-11-27T06:32:18.149Z] 9464.14 IOPS, 36.97 MiB/s [2024-11-27T06:32:18.149Z] 9484.10 IOPS, 37.05 MiB/s [2024-11-27T06:32:18.149Z] 9594.03 IOPS, 37.48 MiB/s [2024-11-27T06:32:18.149Z] 9703.58 IOPS, 37.90 MiB/s [2024-11-27T06:32:18.149Z] 9800.62 IOPS, 38.28 MiB/s [2024-11-27T06:32:18.149Z] 9889.09 IOPS, 38.63 MiB/s [2024-11-27T06:32:18.149Z] 9979.53 IOPS, 38.98 MiB/s [2024-11-27T06:32:18.149Z] [2024-11-27 06:31:57.166798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.166829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 
06:31:57.166858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.166867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.166880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.166887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.166899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.166914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.166927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.166933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.166945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.166951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.166963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.166970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.166981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.166988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.167092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.167110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.167129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.167147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.167331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.167346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.619 [2024-11-27 06:31:57.167360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.619 [2024-11-27 06:31:57.167422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.619 [2024-11-27 06:31:57.167430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.620 [2024-11-27 06:31:57.167437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.620 [2024-11-27 06:31:57.167451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 
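
From [2024-11-27 06:31:57.167307] onward the completions switch to ABORTED - SQ DELETION (00/08), generic status 0x08: after the path goes ANA-inaccessible the qpair is torn down, so in-flight commands come back aborted, and the nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request lines that follow show SPDK completing the still-queued requests locally with the same status instead of ever submitting them. Note that these synthesized completions all report cid:0, unlike the wire completions earlier in the log, which echo each command's cid. The interleaved throughput samples above (12045.27 IOPS, 47.05 MiB/s, falling to 9464.14 IOPS while the path is down and recovering afterwards) are consistent with the len:0x1000 (4 KiB) I/O size: 12045.27 × 4096 bytes ≈ 47.05 MiB per second. The following is a schematic of the local drain with hypothetical types — it sketches the shape of the behaviour the log records, not SPDK's actual nvme_qpair.c code:

/* drain_sketch.c - what "aborting queued i/o" followed by "Command completed
 * manually" records: each request still in the qpair's software queue is
 * completed in place with SCT 0x0 / SC 0x08. struct req is hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct req {
    uint16_t cid;
    uint64_t lba;
    void (*cb)(const struct req *r, uint8_t sct, uint8_t sc);
    struct req *next;
};

static void complete_cb(const struct req *r, uint8_t sct, uint8_t sc)
{
    printf("cid:%u lba:%llu -> (%02x/%02x)\n",
           r->cid, (unsigned long long)r->lba, sct, sc);
}

static void abort_queued_reqs(struct req **queued)
{
    while (*queued != NULL) {
        struct req *r = *queued;
        *queued = r->next;
        fprintf(stderr, "aborting queued i/o\n");   /* the *ERROR* line */
        r->cb(r, 0x0, 0x08);                        /* "Command completed manually" */
    }
}

int main(void)
{
    /* Two queued writes, mirroring the lba:121400/121408 entries in the log. */
    struct req b = { .cid = 0, .lba = 121408, .cb = complete_cb, .next = NULL };
    struct req a = { .cid = 0, .lba = 121400, .cb = complete_cb, .next = &b };
    struct req *queue = &a;
    abort_queued_reqs(&queue);
    return 0;
}
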
[2024-11-27 06:31:57.167679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167979] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.167986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.167993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.620 [2024-11-27 06:31:57.168000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.620 [2024-11-27 06:31:57.168008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.621 [2024-11-27 06:31:57.168407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.621 [2024-11-27 06:31:57.168430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121400 len:8 PRP1 0x0 PRP2 0x0 00:29:58.621 [2024-11-27 06:31:57.168437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:58.621 [2024-11-27 06:31:57.168454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.621 [2024-11-27 06:31:57.168461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121408 len:8 PRP1 0x0 PRP2 0x0 00:29:58.621 [2024-11-27 06:31:57.168468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:58.621 [2024-11-27 06:31:57.168479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.621 [2024-11-27 06:31:57.168484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121416 len:8 PRP1 0x0 PRP2 0x0 00:29:58.621 [2024-11-27 06:31:57.168491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:58.621 [2024-11-27 06:31:57.168502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.621 [2024-11-27 06:31:57.168507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121424 len:8 PRP1 0x0 PRP2 0x0 00:29:58.621 [2024-11-27 06:31:57.168516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:58.621 [2024-11-27 06:31:57.168527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.621 [2024-11-27 06:31:57.168533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121432 len:8 PRP1 0x0 PRP2 0x0 00:29:58.621 [2024-11-27 06:31:57.168539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:58.621 [2024-11-27 06:31:57.168550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.621 [2024-11-27 06:31:57.168555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121440 len:8 PRP1 0x0 PRP2 0x0 00:29:58.621 [2024-11-27 06:31:57.168561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.621 [2024-11-27 06:31:57.168568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:58.621 [2024-11-27 06:31:57.168572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.621 [2024-11-27 06:31:57.168577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:121448 len:8 PRP1 0x0 PRP2 0x0 00:29:58.622 [2024-11-27 06:31:57.168584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort-queued-i/o / manual-complete / ABORTED - SQ DELETION triplet repeats for the remaining queued WRITEs (lba:121456 through 121568) and queued READs (lba:120672 through 120784), all len:8, timestamps 06:31:57.168590 through 06:31:57.176998 ...]
00:29:58.623 [2024-11-27 06:31:57.177099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:58.623 [2024-11-27 06:31:57.177111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same admin-queue abort repeats for ASYNC EVENT REQUEST cid:1 through cid:3 and for GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 ...]
00:29:58.623 [2024-11-27 06:31:57.177176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e700 is same with the state(6) to be set 00:29:58.623 [2024-11-27 06:31:57.178378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.623 [2024-11-27 06:31:57.178414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e700 (9): Bad file descriptor 00:29:58.623 [2024-11-27 06:31:57.178575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.623 [2024-11-27 06:31:57.178590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1e700 with addr=10.0.0.3, port=4421 00:29:58.623 [2024-11-27 06:31:57.178598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e700 is same with the state(6) to be set 00:29:58.623 [2024-11-27 06:31:57.178609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e700 (9): Bad file descriptor 00:29:58.623 [2024-11-27 06:31:57.178619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.623 [2024-11-27 06:31:57.178636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.623 [2024-11-27 06:31:57.178643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.623 [2024-11-27 06:31:57.178649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
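The storm of aborts and the failed reset above are the multipath test working as intended: when the listener the host was using disappears, every queued command is completed as ABORTED - SQ DELETION, and the reconnect machinery retries the controller on the alternate address (10.0.0.3:4421), collecting connect() errno 111 until that listener is actually up. A hedged sketch of the kind of listener flip that produces these records, built only from rpc.py calls that appear verbatim elsewhere in this log (the test's exact sequencing lives in test/nvmf/host/multipath.sh, not here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the path the host is currently using: queued I/O is completed
    # as ABORTED - SQ DELETION and the host begins reconnect attempts
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    # bring up the alternate path: reconnects to 4421 stop failing with
    # errno 111 once this listener is live
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421

Consistent with that reading, about ten seconds further down the log the host reports "Resetting controller successful" and the IOPS counters resume climbing.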
00:29:58.623 [2024-11-27 06:31:57.178656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.623 10058.54 IOPS, 39.29 MiB/s [2024-11-27T06:32:18.153Z] 10128.97 IOPS, 39.57 MiB/s [2024-11-27T06:32:18.153Z] 10202.41 IOPS, 39.85 MiB/s [2024-11-27T06:32:18.153Z] 10270.11 IOPS, 40.12 MiB/s [2024-11-27T06:32:18.153Z] 10339.18 IOPS, 40.39 MiB/s [2024-11-27T06:32:18.153Z] 10403.83 IOPS, 40.64 MiB/s [2024-11-27T06:32:18.153Z] 10464.98 IOPS, 40.88 MiB/s [2024-11-27T06:32:18.153Z] 10519.83 IOPS, 41.09 MiB/s [2024-11-27T06:32:18.153Z] 10573.60 IOPS, 41.30 MiB/s [2024-11-27T06:32:18.153Z] 10625.25 IOPS, 41.50 MiB/s [2024-11-27T06:32:18.153Z] [2024-11-27 06:32:07.229177] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:29:58.623 10671.82 IOPS, 41.69 MiB/s [2024-11-27T06:32:18.153Z] 10715.89 IOPS, 41.86 MiB/s [2024-11-27T06:32:18.153Z] 10759.77 IOPS, 42.03 MiB/s [2024-11-27T06:32:18.153Z] 10803.62 IOPS, 42.20 MiB/s [2024-11-27T06:32:18.153Z] 10841.18 IOPS, 42.35 MiB/s [2024-11-27T06:32:18.153Z] 10880.16 IOPS, 42.50 MiB/s [2024-11-27T06:32:18.153Z] 10917.84 IOPS, 42.65 MiB/s [2024-11-27T06:32:18.153Z] 10953.04 IOPS, 42.79 MiB/s [2024-11-27T06:32:18.153Z] 10986.26 IOPS, 42.92 MiB/s [2024-11-27T06:32:18.153Z] 11018.94 IOPS, 43.04 MiB/s [2024-11-27T06:32:18.153Z] Received shutdown signal, test time was about 54.189412 seconds 00:29:58.623 00:29:58.623 Latency(us) 00:29:58.623 [2024-11-27T06:32:18.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.623 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:58.623 Verification LBA range: start 0x0 length 0x4000 00:29:58.623 Nvme0n1 : 54.19 11022.43 43.06 0.00 0.00 11591.31 1380.04 7020619.62 00:29:58.623 [2024-11-27T06:32:18.153Z] =================================================================================================================== 00:29:58.623 [2024-11-27T06:32:18.153Z] Total : 11022.43 43.06 0.00 0.00 11591.31 1380.04 7020619.62 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.623 rmmod nvme_tcp 00:29:58.623 rmmod nvme_fabrics 00:29:58.623 rmmod nvme_keyring 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@128 -- # set -e 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 114789 ']' 00:29:58.623 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 114789 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 114789 ']' 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 114789 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114789 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:58.624 killing process with pid 114789 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114789' 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 114789 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 114789 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:58.624 06:32:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:29:58.624 00:29:58.624 real 0m59.358s 00:29:58.624 user 2m51.248s 00:29:58.624 sys 0m9.203s 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.624 ************************************ 00:29:58.624 END TEST nvmf_host_multipath 00:29:58.624 ************************************ 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.624 ************************************ 00:29:58.624 START TEST nvmf_timeout 00:29:58.624 ************************************ 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:58.624 * Looking for test storage... 
00:29:58.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.624 06:32:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:58.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.624 --rc genhtml_branch_coverage=1 00:29:58.624 --rc genhtml_function_coverage=1 00:29:58.624 --rc genhtml_legend=1 00:29:58.624 --rc geninfo_all_blocks=1 00:29:58.624 --rc geninfo_unexecuted_blocks=1 00:29:58.624 00:29:58.624 ' 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:58.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.624 --rc genhtml_branch_coverage=1 00:29:58.624 --rc genhtml_function_coverage=1 00:29:58.624 --rc genhtml_legend=1 00:29:58.624 --rc geninfo_all_blocks=1 00:29:58.624 --rc geninfo_unexecuted_blocks=1 00:29:58.624 00:29:58.624 ' 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:58.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.624 --rc genhtml_branch_coverage=1 00:29:58.624 --rc genhtml_function_coverage=1 00:29:58.624 --rc genhtml_legend=1 00:29:58.624 --rc geninfo_all_blocks=1 00:29:58.624 --rc geninfo_unexecuted_blocks=1 00:29:58.624 00:29:58.624 ' 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:58.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.624 --rc genhtml_branch_coverage=1 00:29:58.624 --rc genhtml_function_coverage=1 00:29:58.624 --rc genhtml_legend=1 00:29:58.624 --rc geninfo_all_blocks=1 00:29:58.624 --rc geninfo_unexecuted_blocks=1 00:29:58.624 00:29:58.624 ' 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.624 
06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.624 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.625 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:58.625 06:32:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:58.625 Cannot find device "nvmf_init_br" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:58.625 Cannot find device "nvmf_init_br2" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:29:58.625 Cannot find device "nvmf_tgt_br" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:58.625 Cannot find device "nvmf_tgt_br2" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:58.625 Cannot find device "nvmf_init_br" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:58.625 Cannot find device "nvmf_init_br2" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:58.625 Cannot find device "nvmf_tgt_br" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:58.625 Cannot find device "nvmf_tgt_br2" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:58.625 Cannot find device "nvmf_br" 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:29:58.625 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:58.885 Cannot find device "nvmf_init_if" 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:58.885 Cannot find device "nvmf_init_if2" 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:58.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:58.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
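The nvmf_veth_init trace above builds the whole test network from nothing. Condensed into a standalone sketch using the same interface names and addresses as this run (the second veth pair, nvmf_init_if2/nvmf_tgt_if2, and the iptables comment markers are omitted; needs root, and the pings that follow in the log are the sanity check):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per side; the *_br ends are enslaved to a bridge below
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port and allow bridged traffic
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT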
00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:58.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:58.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:29:58.885 00:29:58.885 --- 10.0.0.3 ping statistics --- 00:29:58.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.885 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:58.885 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:58.885 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:29:58.885 00:29:58.885 --- 10.0.0.4 ping statistics --- 00:29:58.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.885 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:58.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:29:58.885 00:29:58.885 --- 10.0.0.1 ping statistics --- 00:29:58.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.885 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:58.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:29:58.885 00:29:58.885 --- 10.0.0.2 ping statistics --- 00:29:58.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.885 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=116203 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 116203 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:58.885 06:32:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 116203 ']' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.885 06:32:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:58.885 [2024-11-27 06:32:18.383934] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:29:58.886 [2024-11-27 06:32:18.384280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.144 [2024-11-27 06:32:18.499587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:59.144 [2024-11-27 06:32:18.525523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:59.144 [2024-11-27 06:32:18.543175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.144 [2024-11-27 06:32:18.543211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.144 [2024-11-27 06:32:18.543218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.144 [2024-11-27 06:32:18.543222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.144 [2024-11-27 06:32:18.543227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
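nvmfappstart then reduces to launching the target inside that namespace and blocking until its RPC socket answers. A minimal equivalent of the traced commands (the polling loop here is an approximation of the waitforlisten helper, not its actual implementation):

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the app accepts commands
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.1
    done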
00:29:59.144 [2024-11-27 06:32:18.543981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.144 [2024-11-27 06:32:18.544148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:00.078 [2024-11-27 06:32:19.476691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.078 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:00.336 Malloc0 00:30:00.337 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.337 06:32:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.595 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:00.854 [2024-11-27 06:32:20.190610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:00.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=116289 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 116289 /var/tmp/bdevperf.sock 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 116289 ']' 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
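With the target up, the entire data path is provisioned by the five rpc.py calls traced above; collected in one place, with every argument exactly as this run used it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notice in the trace is the confirmation that the last call took effect.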
00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.854 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:00.854 [2024-11-27 06:32:20.248287] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:30:00.854 [2024-11-27 06:32:20.248348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116289 ] 00:30:00.854 [2024-11-27 06:32:20.364387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:01.112 [2024-11-27 06:32:20.385113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.112 [2024-11-27 06:32:20.403705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.112 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.112 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:01.112 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:01.371 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:01.629 NVMe0n1 00:30:01.629 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=116323 00:30:01.629 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:01.629 06:32:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:30:01.629 Running I/O for 10 seconds... 
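The host side is a second SPDK app: bdevperf started in -z (wait-for-RPC) mode on its own socket, configured over RPC, then kicked off with perform_tests. Reassembled from the trace with the flags exactly as logged (-r -1 on bdev_nvme_set_options appears to make bdev-level retries unlimited, and the 5 s ctrlr-loss / 2 s reconnect-delay pair is what this timeout test exercises):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    # -z: start idle and wait for RPC configuration before running I/O
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" \
        -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # the 128-deep, 4 KiB verify workload only starts on this call
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests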
00:30:02.563 06:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:30:02.824 12892.00 IOPS, 50.36 MiB/s [2024-11-27T06:32:22.354Z] [2024-11-27 06:32:22.160315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe85bb0 is same with the state(6) to be set
[the preceding recv-state message repeats verbatim dozens more times, timestamps 06:32:22.160315 through 06:32:22.160670, as the target tears down the queue pair; repetitions elided]
00:30:02.825 [2024-11-27 06:32:22.161450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:02.825 [2024-11-27 06:32:22.161483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:02.826 [2024-11-27 06:32:22.162048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:02.826 [2024-11-27 06:32:22.162054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same print_command/print_completion pair repeats for each remaining outstanding I/O, READs and WRITEs spanning lba 114640 through 115656, every one aborted with SQ DELETION; repetitions elided]
00:30:02.829 [2024-11-27 06:32:22.163159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:02.829 [2024-11-27 06:32:22.163164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:02.829 [2024-11-27 06:32:22.163169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115064 len:8 PRP1 0x0 PRP2 0x0
00:30:02.829 [2024-11-27 06:32:22.163175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:02.829 [2024-11-27 06:32:22.163428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:02.829 [2024-11-27 06:32:22.163486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1140f70
(9): Bad file descriptor 00:30:02.829 [2024-11-27 06:32:22.163559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.829 [2024-11-27 06:32:22.163578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1140f70 with addr=10.0.0.3, port=4420 00:30:02.829 [2024-11-27 06:32:22.163584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f70 is same with the state(6) to be set 00:30:02.829 [2024-11-27 06:32:22.163595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1140f70 (9): Bad file descriptor 00:30:02.829 [2024-11-27 06:32:22.163605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:02.829 [2024-11-27 06:32:22.163610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:02.829 [2024-11-27 06:32:22.163618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:02.829 [2024-11-27 06:32:22.163626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:02.829 [2024-11-27 06:32:22.163632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:02.829 06:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:30:04.699 7165.00 IOPS, 27.99 MiB/s [2024-11-27T06:32:24.229Z] 4776.67 IOPS, 18.66 MiB/s [2024-11-27T06:32:24.229Z] [2024-11-27 06:32:24.163841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.699 [2024-11-27 06:32:24.163873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1140f70 with addr=10.0.0.3, port=4420 00:30:04.699 [2024-11-27 06:32:24.163881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f70 is same with the state(6) to be set 00:30:04.699 [2024-11-27 06:32:24.163894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1140f70 (9): Bad file descriptor 00:30:04.699 [2024-11-27 06:32:24.163909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:04.699 [2024-11-27 06:32:24.163914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:04.699 [2024-11-27 06:32:24.163919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:04.699 [2024-11-27 06:32:24.163925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
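For reference, the sleep at host/timeout.sh@56 above is simply waiting out these reconnect attempts. While the target is unreachable, the controller's fate can be watched from outside over the same RPC socket the test uses. This is a minimal sketch assembled from commands already traced in this log (the watch loop itself is not part of the test script): the name list keeps reporting NVMe0 while reconnects are still being retried, and goes empty once the controller is dropped.

    # Poll bdevperf's RPC socket a few times while the target is down.
    for _ in $(seq 1 5); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_controllers | jq -r '.[].name'
        sleep 2
    done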
00:30:04.699 [2024-11-27 06:32:24.163931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:04.699 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:30:04.699 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.699 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:04.957 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:30:04.957 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:30:04.957 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:04.957 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:05.216 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:30:05.216 06:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:30:06.716 3582.50 IOPS, 13.99 MiB/s [2024-11-27T06:32:26.246Z] 2866.00 IOPS, 11.20 MiB/s [2024-11-27T06:32:26.246Z] [2024-11-27 06:32:26.164161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.716 [2024-11-27 06:32:26.164200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1140f70 with addr=10.0.0.3, port=4420 00:30:06.716 [2024-11-27 06:32:26.164208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140f70 is same with the state(6) to be set 00:30:06.716 [2024-11-27 06:32:26.164220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1140f70 (9): Bad file descriptor 00:30:06.716 [2024-11-27 06:32:26.164229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:06.716 [2024-11-27 06:32:26.164234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:06.716 [2024-11-27 06:32:26.164239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:06.716 [2024-11-27 06:32:26.164245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:06.716 [2024-11-27 06:32:26.164251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:08.615 2388.33 IOPS, 9.33 MiB/s [2024-11-27T06:32:28.406Z] 2047.14 IOPS, 8.00 MiB/s [2024-11-27T06:32:28.406Z] [2024-11-27 06:32:28.164413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:08.876 [2024-11-27 06:32:28.164443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:08.876 [2024-11-27 06:32:28.164448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:08.876 [2024-11-27 06:32:28.164454] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:30:08.876 [2024-11-27 06:32:28.164460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
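The checks issued above at host/timeout.sh@57/@58 (and again at @62/@63 below) reduce to comparing RPC output against the expected controller and bdev names. A standalone equivalent, assuming the same socket and names as this run:

    # NVMe0/NVMe0n1 are the names assigned at attach time; once the
    # controller-loss timeout has fired, both queries return empty output.
    ctrl=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_bdevs | jq -r '.[].name')
    [[ $ctrl == NVMe0 && $bdev == NVMe0n1 ]]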
00:30:09.821 1791.25 IOPS, 7.00 MiB/s 00:30:09.821 Latency(us) 00:30:09.821 [2024-11-27T06:32:29.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.821 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:09.821 Verification LBA range: start 0x0 length 0x4000 00:30:09.821 NVMe0n1 : 8.11 1766.01 6.90 15.77 0.00 71697.56 2003.89 7020619.62 00:30:09.821 [2024-11-27T06:32:29.351Z] =================================================================================================================== 00:30:09.821 [2024-11-27T06:32:29.351Z] Total : 1766.01 6.90 15.77 0.00 71697.56 2003.89 7020619.62 00:30:09.821 { 00:30:09.821 "results": [ 00:30:09.821 { 00:30:09.821 "job": "NVMe0n1", 00:30:09.821 "core_mask": "0x4", 00:30:09.821 "workload": "verify", 00:30:09.821 "status": "finished", 00:30:09.821 "verify_range": { 00:30:09.821 "start": 0, 00:30:09.821 "length": 16384 00:30:09.821 }, 00:30:09.821 "queue_depth": 128, 00:30:09.821 "io_size": 4096, 00:30:09.821 "runtime": 8.114359, 00:30:09.821 "iops": 1766.00517674902, 00:30:09.821 "mibps": 6.898457721675859, 00:30:09.821 "io_failed": 128, 00:30:09.821 "io_timeout": 0, 00:30:09.821 "avg_latency_us": 71697.55565553273, 00:30:09.821 "min_latency_us": 2003.8892307692308, 00:30:09.821 "max_latency_us": 7020619.618461538 00:30:09.821 } 00:30:09.821 ], 00:30:09.821 "core_count": 1 00:30:09.821 } 00:30:10.088 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:30:10.088 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:10.088 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:10.350 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:30:10.350 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:30:10.350 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:10.350 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 116323 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 116289 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 116289 ']' 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 116289 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116289 00:30:10.611 killing process with pid 116289 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116289' 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 116289 00:30:10.611 Received shutdown signal, test time was about 8.934825 seconds 00:30:10.611 00:30:10.611 Latency(us) 00:30:10.611 [2024-11-27T06:32:30.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.611 [2024-11-27T06:32:30.141Z] =================================================================================================================== 00:30:10.611 [2024-11-27T06:32:30.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:10.611 06:32:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 116289 00:30:10.611 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:10.873 [2024-11-27 06:32:30.219917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=116477 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 116477 /var/tmp/bdevperf.sock 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 116477 ']' 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.873 06:32:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:10.873 [2024-11-27 06:32:30.274716] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:30:10.873 [2024-11-27 06:32:30.274779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116477 ] 00:30:10.873 [2024-11-27 06:32:30.388190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
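The relaunch sequence traced here and just below reduces to: re-add the TCP listener on the target, start bdevperf idle (-z, driven over its RPC socket), attach the controller with the loss/reconnect timeouts under test, and then kick off the workload. A condensed sketch of those steps, assembled only from the commands this log records (option semantics are as the test script uses them, not restated here):

    # Re-add the target listener (host/timeout.sh@71).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Start bdevperf with no bdevs; it waits on /var/tmp/bdevperf.sock.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

    # Same option string as host/timeout.sh@78, then attach with the
    # timeouts this test exercises (@79).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_options -r -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 \
        --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Drive the configured verify workload (host/timeout.sh@83).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests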
00:30:11.134 [2024-11-27 06:32:30.408828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.134 [2024-11-27 06:32:30.425623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.708 06:32:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.708 06:32:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:11.708 06:32:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:11.970 06:32:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:30:12.229 NVMe0n1 00:30:12.229 06:32:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:12.229 06:32:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=116519 00:30:12.229 06:32:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:30:12.229 Running I/O for 10 seconds... 00:30:13.163 06:32:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:13.538 13030.00 IOPS, 50.90 MiB/s [2024-11-27T06:32:33.068Z] [2024-11-27 06:32:32.793984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.794061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same 
with the state(6) to be set 00:30:13.538 [... further identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* repeats for tqpair=0xfb8960 omitted ...] [2024-11-27 06:32:32.794191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8960 is same with the state(6) to be set 00:30:13.538 [2024-11-27 06:32:32.795347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.538 [2024-11-27 06:32:32.795376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.538 [... identical WRITE/ABORTED - SQ DELETION pairs for lba:118584 through lba:119152, READ/ABORTED pairs for lba:118456 through lba:118568, and WRITE/ABORTED pairs for lba:119160 through lba:119344 omitted; the entries differ only in cid and lba ...] 00:30:13.541 [2024-11-27 06:32:32.796525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: [2024-11-27 06:32:32.796529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119352 len:8 PRP1 0x0 PRP2 0x0 [2024-11-27 06:32:32.796534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-27 06:32:32.796541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o [2024-11-27 06:32:32.796544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: [2024-11-27 06:32:32.796548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119360 len:8 PRP1 0x0 PRP2 0x0 [2024-11-27 06:32:32.796552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-27 06:32:32.796556]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.541 [2024-11-27 06:32:32.796560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.541 [2024-11-27 06:32:32.796564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119368 len:8 PRP1 0x0 PRP2 0x0 00:30:13.541 [2024-11-27 06:32:32.796568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.541 [2024-11-27 06:32:32.796573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.541 [2024-11-27 06:32:32.796576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.541 [2024-11-27 06:32:32.796580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119376 len:8 PRP1 0x0 PRP2 0x0 00:30:13.541 [2024-11-27 06:32:32.796584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.541 [2024-11-27 06:32:32.796588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.541 [2024-11-27 06:32:32.796592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.541 [2024-11-27 06:32:32.796595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119384 len:8 PRP1 0x0 PRP2 0x0 00:30:13.541 [2024-11-27 06:32:32.796599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.541 [2024-11-27 06:32:32.796605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.541 [2024-11-27 06:32:32.796608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.541 [2024-11-27 06:32:32.796611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119392 len:8 PRP1 0x0 PRP2 0x0 00:30:13.541 [2024-11-27 06:32:32.796616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.541 [2024-11-27 06:32:32.796620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.541 [2024-11-27 06:32:32.796623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.541 [2024-11-27 06:32:32.796627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119400 len:8 PRP1 0x0 PRP2 0x0 00:30:13.541 [2024-11-27 06:32:32.796631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.541 [2024-11-27 06:32:32.796635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.541 [2024-11-27 06:32:32.796638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.541 [2024-11-27 06:32:32.796642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119408 len:8 PRP1 0x0 PRP2 0x0 00:30:13.541 [2024-11-27 06:32:32.796646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.796652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:13.542 [2024-11-27 06:32:32.796656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.796659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119416 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.796663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.796668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.542 [2024-11-27 06:32:32.796671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.796675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119424 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.796679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.796683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.542 [2024-11-27 06:32:32.796686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.796690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119432 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.796694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.796699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.542 [2024-11-27 06:32:32.796702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.796706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119440 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.796710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.796715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.542 [2024-11-27 06:32:32.796718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.796721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119448 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.796726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.796730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.542 [2024-11-27 06:32:32.796733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.796737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119456 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.796742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.804622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.542 
[2024-11-27 06:32:32.804648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.804655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119464 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.804661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.804666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.542 [2024-11-27 06:32:32.804670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.542 [2024-11-27 06:32:32.804673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119472 len:8 PRP1 0x0 PRP2 0x0 00:30:13.542 [2024-11-27 06:32:32.804678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.804776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.542 [2024-11-27 06:32:32.804784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.804789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.542 [2024-11-27 06:32:32.804794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.804799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.542 [2024-11-27 06:32:32.804803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.804808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.542 [2024-11-27 06:32:32.804812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.542 [2024-11-27 06:32:32.804816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f70 is same with the state(6) to be set 00:30:13.542 [2024-11-27 06:32:32.804996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.542 [2024-11-27 06:32:32.805008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:13.542 [2024-11-27 06:32:32.805065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.542 [2024-11-27 06:32:32.805074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1839f70 with addr=10.0.0.3, port=4420 00:30:13.542 [2024-11-27 06:32:32.805079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f70 is same with the state(6) to be set 00:30:13.542 [2024-11-27 06:32:32.805087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:13.542 [2024-11-27 06:32:32.805095] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:13.542 [2024-11-27 06:32:32.805100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:13.542 [2024-11-27 06:32:32.805105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:13.542 [2024-11-27 06:32:32.805110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:13.542 [2024-11-27 06:32:32.805115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:13.542 06:32:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:30:14.478 7403.50 IOPS, 28.92 MiB/s [2024-11-27T06:32:34.008Z] [2024-11-27 06:32:33.805205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.478 [2024-11-27 06:32:33.805240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1839f70 with addr=10.0.0.3, port=4420 00:30:14.478 [2024-11-27 06:32:33.805247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f70 is same with the state(6) to be set 00:30:14.478 [2024-11-27 06:32:33.805258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:14.478 [2024-11-27 06:32:33.805268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:14.478 [2024-11-27 06:32:33.805272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:14.478 [2024-11-27 06:32:33.805277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:14.478 [2024-11-27 06:32:33.805283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:14.478 [2024-11-27 06:32:33.805288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:14.478 06:32:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:14.478 [2024-11-27 06:32:34.000963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:14.736 06:32:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 116519 00:30:15.304 4935.67 IOPS, 19.28 MiB/s [2024-11-27T06:32:34.834Z] [2024-11-27 06:32:34.820645] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
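The wall of *NOTICE* completions above is the expected signature of this phase of the timeout test rather than a failure: status (00/08) decodes to status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", so every read and write still queued on qpair 1 is failed back to bdevperf when the target tears down the submission queue. The host side then retries once per second, each connect() failing with errno 111 (ECONNREFUSED) while no listener exists, until the nvmf_subsystem_add_listener RPC above restores the listener and the next controller reset succeeds. A minimal sketch of the same remove/re-add cycle, assuming a running SPDK nvmf target with nqn.2016-06.io.spdk:cnode1 already configured (the rpc.py path, address, and port simply mirror the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the TCP listener: queued I/O completes with "ABORTED - SQ DELETION"
  # and the initiator begins per-second reconnect attempts (errno 111).
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 2
  # Restore the listener: the next reconnect succeeds and bdev_nvme logs
  # "Resetting controller successful."
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420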
00:30:17.186 3701.75 IOPS, 14.46 MiB/s
[2024-11-27T06:32:37.675Z] 5199.80 IOPS, 20.31 MiB/s
[2024-11-27T06:32:39.066Z] 6491.00 IOPS, 25.36 MiB/s
[2024-11-27T06:32:40.011Z] 7416.00 IOPS, 28.97 MiB/s
[2024-11-27T06:32:40.954Z] 8133.50 IOPS, 31.77 MiB/s
[2024-11-27T06:32:41.899Z] 8659.67 IOPS, 33.83 MiB/s
[2024-11-27T06:32:41.899Z] 9090.40 IOPS, 35.51 MiB/s
00:30:22.370 Latency(us)
00:30:22.370 [2024-11-27T06:32:41.900Z] Device Information : runtime(s)    IOPS   MiB/s   Fail/s   TO/s   Average       min        max
00:30:22.370 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:22.370 Verification LBA range: start 0x0 length 0x4000
00:30:22.370 NVMe0n1            :      10.01  9098.06   35.54     0.00   0.00  14043.24   1392.64 3019898.88
00:30:22.370 [2024-11-27T06:32:41.900Z] ===================================================================================================================
00:30:22.370 [2024-11-27T06:32:41.900Z] Total              :             9098.06   35.54     0.00   0.00  14043.24   1392.64 3019898.88
00:30:22.370 {
00:30:22.370   "results": [
00:30:22.370     {
00:30:22.370       "job": "NVMe0n1",
00:30:22.370       "core_mask": "0x4",
00:30:22.370       "workload": "verify",
00:30:22.370       "status": "finished",
00:30:22.370       "verify_range": {
00:30:22.370         "start": 0,
00:30:22.370         "length": 16384
00:30:22.370       },
00:30:22.370       "queue_depth": 128,
00:30:22.370       "io_size": 4096,
00:30:22.370       "runtime": 10.005648,
00:30:22.370       "iops": 9098.061414912858,
00:30:22.370       "mibps": 35.53930240200335,
00:30:22.370       "io_failed": 0,
00:30:22.370       "io_timeout": 0,
00:30:22.370       "avg_latency_us": 14043.242479331022,
00:30:22.370       "min_latency_us": 1392.64,
00:30:22.370       "max_latency_us": 3019898.88
00:30:22.370     }
00:30:22.370   ],
00:30:22.370   "core_count": 1
00:30:22.370 }
00:30:22.370 06:32:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=116644
06:32:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
06:32:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:22.370 Running I/O for 10 seconds...
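The JSON block is bdevperf's machine-readable copy of the latency table printed above it, so the completed run can be post-processed without scraping the formatted output. Note that mibps is simply iops * io_size / 2^20 (9098.06 * 4096 B per I/O = 35.54 MiB/s), and the ~3.02 s max_latency_us is consistent with I/O that sat queued across the listener outage. A small post-processing sketch, assuming the JSON was captured to a hypothetical results.json and that jq is installed (neither step is part of the test itself; the field names match the log):

  # Summarize the run from the captured JSON.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, max \(.max_latency_us) us"' results.json
  # Cross-check MiB/s from IOPS and the 4096-byte I/O size.
  awk 'BEGIN { printf "%.2f MiB/s\n", 9098.061414912858 * 4096 / (1024 * 1024) }'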
00:30:23.311 06:32:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:23.577 12907.00 IOPS, 50.42 MiB/s [2024-11-27T06:32:43.108Z] [2024-11-27 06:32:42.883190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 
06:32:42.883298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to 
be set 00:30:23.578 [2024-11-27 06:32:42.883380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.883391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9a10 is same with the state(6) to be set 00:30:23.578 [2024-11-27 06:32:42.884099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.578 [2024-11-27 06:32:42.884128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.578 [2024-11-27 06:32:42.884139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.578 [2024-11-27 06:32:42.884144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.578 [2024-11-27 06:32:42.884150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.578 [2024-11-27 06:32:42.884155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.578 [2024-11-27 06:32:42.884160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.578 [2024-11-27 06:32:42.884165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.578 [2024-11-27 06:32:42.884171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.578 [2024-11-27 06:32:42.884175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.578 [2024-11-27 06:32:42.884181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.578 [2024-11-27 06:32:42.884185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.578 [2024-11-27 06:32:42.884190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.578 [2024-11-27 06:32:42.884194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.579 [2024-11-27 06:32:42.884425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.579 [2024-11-27 06:32:42.884511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115024 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:23.579 [2024-11-27 06:32:42.884515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:23.580 [2024-11-27 06:32:42.884616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 
06:32:42.884714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.580 [2024-11-27 06:32:42.884810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.580 [2024-11-27 06:32:42.884814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.581 [2024-11-27 06:32:42.884819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.581 [2024-11-27 06:32:42.884823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for the remaining queued reads, lba 115280 through 115464 (various cids); elided ...]
00:30:23.581 [2024-11-27 06:32:42.885086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.581 [2024-11-27 06:32:42.885090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the matching WRITE / ABORTED - SQ DELETION pair repeats for lba 115552 through 115792 (various cids); elided ...]
00:30:23.582 [2024-11-27 06:32:42.885411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:23.582 [2024-11-27 06:32:42.885416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115800 len:8 PRP1 0x0 PRP2 0x0 00:30:23.582 [2024-11-27 06:32:42.885421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.582 [2024-11-27 06:32:42.885428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:23.582 [2024-11-27 06:32:42.885432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:23.582 [2024-11-27 06:32:42.885435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115808 len:8 PRP1 0x0 PRP2 0x0 00:30:23.582 [2024-11-27 06:32:42.885440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.582 [2024-11-27 06:32:42.885627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:23.582 [2024-11-27 06:32:42.885676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:23.582 [2024-11-27 06:32:42.885732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.582 [2024-11-27 06:32:42.885747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1839f70 with addr=10.0.0.3, port=4420 00:30:23.582 [2024-11-27 06:32:42.885752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f70 is same with the state(6) to be set 00:30:23.582 [2024-11-27 06:32:42.885761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:23.582 [2024-11-27 06:32:42.885768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:30:23.582 [2024-11-27 06:32:42.885773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:30:23.582 [2024-11-27 06:32:42.885778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:23.582 [2024-11-27 06:32:42.885783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:30:23.582 [2024-11-27 06:32:42.885789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:23.583 06:32:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:30:24.527 7174.50 IOPS, 28.03 MiB/s [2024-11-27T06:32:44.057Z] [2024-11-27 06:32:43.885865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.527 [2024-11-27 06:32:43.885941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1839f70 with addr=10.0.0.3, port=4420 00:30:24.527 [2024-11-27 06:32:43.885951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f70 is same with the state(6) to be set 00:30:24.527 [2024-11-27 06:32:43.885963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:24.527 [2024-11-27 06:32:43.885972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:30:24.527 [2024-11-27 06:32:43.885976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:30:24.527 [2024-11-27 06:32:43.885982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:24.527 [2024-11-27 06:32:43.885988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:30:24.527 [2024-11-27 06:32:43.885994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:25.472 4783.00 IOPS, 18.68 MiB/s [2024-11-27T06:32:45.002Z] [2024-11-27 06:32:44.886066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.472 [2024-11-27 06:32:44.886093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1839f70 with addr=10.0.0.3, port=4420 00:30:25.472 [2024-11-27 06:32:44.886099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f70 is same with the state(6) to be set 00:30:25.472 [2024-11-27 06:32:44.886110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:25.472 [2024-11-27 06:32:44.886119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:30:25.472 [2024-11-27 06:32:44.886123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:30:25.472 [2024-11-27 06:32:44.886128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:25.472 [2024-11-27 06:32:44.886135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:30:25.472 [2024-11-27 06:32:44.886142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:26.412 3587.25 IOPS, 14.01 MiB/s [2024-11-27T06:32:45.942Z] [2024-11-27 06:32:45.888747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.412 [2024-11-27 06:32:45.888779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1839f70 with addr=10.0.0.3, port=4420 00:30:26.412 [2024-11-27 06:32:45.888785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f70 is same with the state(6) to be set 00:30:26.412 [2024-11-27 06:32:45.888969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839f70 (9): Bad file descriptor 00:30:26.412 [2024-11-27 06:32:45.889135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:30:26.412 [2024-11-27 06:32:45.889140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:30:26.412 [2024-11-27 06:32:45.889145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:26.412 [2024-11-27 06:32:45.889151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
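A hedged annotation on the failures above, not part of the captured log: each reconnect attempt dies in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED, i.e. nothing is accepting on 10.0.0.3:4420 while the subsystem listener is down; the reset only succeeds once the listener is re-added just below. A quick way to confirm the errno mapping (assuming python3, which rpc.py already requires):

  # map errno 111 from the connect() failures above to its symbolic name
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused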
00:30:26.412 [2024-11-27 06:32:45.889156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:26.412 06:32:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:26.673 [2024-11-27 06:32:46.090774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:26.673 06:32:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 116644 00:30:27.618 2869.80 IOPS, 11.21 MiB/s [2024-11-27T06:32:47.148Z] [2024-11-27 06:32:46.914161] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:30:29.502 4190.83 IOPS, 16.37 MiB/s [2024-11-27T06:32:49.974Z] 5344.57 IOPS, 20.88 MiB/s [2024-11-27T06:32:50.909Z] 6185.38 IOPS, 24.16 MiB/s [2024-11-27T06:32:51.842Z] 6905.56 IOPS, 26.97 MiB/s [2024-11-27T06:32:51.842Z] 7486.50 IOPS, 29.24 MiB/s 00:30:32.312 Latency(us) 00:30:32.312 [2024-11-27T06:32:51.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.312 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:32.312 Verification LBA range: start 0x0 length 0x4000 00:30:32.312 NVMe0n1 : 10.01 7493.65 29.27 5057.58 0.00 10177.59 557.69 3019898.88 00:30:32.312 [2024-11-27T06:32:51.842Z] =================================================================================================================== 00:30:32.312 [2024-11-27T06:32:51.842Z] Total : 7493.65 29.27 5057.58 0.00 10177.59 0.00 3019898.88 00:30:32.312 { 00:30:32.312 "results": [ 00:30:32.312 { 00:30:32.312 "job": "NVMe0n1", 00:30:32.312 "core_mask": "0x4", 00:30:32.312 "workload": "verify", 00:30:32.312 "status": "finished", 00:30:32.312 "verify_range": { 00:30:32.312 "start": 0, 00:30:32.312 "length": 16384 00:30:32.312 }, 00:30:32.312 "queue_depth": 128, 00:30:32.312 "io_size": 4096, 00:30:32.312 "runtime": 10.007544, 00:30:32.312 "iops": 7493.646792859467, 00:30:32.312 "mibps": 29.27205778460729, 00:30:32.312 "io_failed": 50614, 00:30:32.312 "io_timeout": 0, 00:30:32.312 "avg_latency_us": 10177.592158202844, 00:30:32.312 "min_latency_us": 557.6861538461538, 00:30:32.312 "max_latency_us": 3019898.88 00:30:32.312 } 00:30:32.312 ], 00:30:32.312 "core_count": 1 00:30:32.312 } 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 116477 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 116477 ']' 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 116477 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116477 00:30:32.312 killing process with pid 116477 00:30:32.312 Received shutdown signal, test time was about 10.000000 seconds 00:30:32.312 00:30:32.312 Latency(us) 00:30:32.312 [2024-11-27T06:32:51.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.312 [2024-11-27T06:32:51.842Z] =================================================================================================================== 00:30:32.312 [2024-11-27T06:32:51.842Z] Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116477' 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 116477 00:30:32.312 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 116477 00:30:32.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=116770 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 116770 /var/tmp/bdevperf.sock 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 116770 ']' 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.571 06:32:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:32.571 [2024-11-27 06:32:51.990998] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:30:32.571 [2024-11-27 06:32:51.991054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116770 ] 00:30:32.834 [2024-11-27 06:32:52.105851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
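As a minimal sketch of the pattern the records above show (paths and flags copied from the log; the readiness poll via rpc_get_methods is a stand-in for the suite's waitforlisten helper): bdevperf is restarted with -z so it starts no I/O until it is driven over its private JSON-RPC socket.

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  # start bdevperf idle (-z): it only waits for JSON-RPC commands on $SOCK
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
  # poll until the RPC socket answers (stand-in for waitforlisten)
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done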
00:30:32.834 [2024-11-27 06:32:52.129883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.834 [2024-11-27 06:32:52.151321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.402 06:32:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.402 06:32:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:33.402 06:32:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 116770 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:30:33.402 06:32:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=116794 00:30:33.402 06:32:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:30:33.664 06:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:33.926 NVMe0n1 06:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=116847 06:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 06:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 Running I/O for 10 seconds... 06:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:35.126 25603.00 IOPS, 100.01 MiB/s [2024-11-27T06:32:54.656Z] [2024-11-27 06:32:54.542634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc5a40 is same with the state(6) to be set
[... the identical recv-state *ERROR* record repeats roughly a hundred times between 06:32:54.542634 and 06:32:54.546657 while the listener teardown is processed; elided ...]
00:30:35.128 [2024-11-27 06:32:54.546825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.128 [2024-11-27 06:32:54.546862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
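Before the remaining aborts, a hedged sketch of the RPC sequence the records above trace for this test case (commands copied from the log; the reading of the two timeout options is mine, from their names): --reconnect-delay-sec 2 spaces reconnect attempts two seconds apart, and --ctrlr-loss-timeout-sec 5 bounds how long the bdev layer keeps retrying before it gives up on the controller; removing the listener mid-run is what forces the qpair teardown and the aborted reads seen here.

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  # attach the target with bounded reconnect behaviour
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the workload bdevperf was configured for on its command line
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
  # dropping the listener mid-run drives the reconnect/timeout path under test
  "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420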
[... the READ / ABORTED - SQ DELETION pair repeats for several dozen more queued commands (various cid and lba values); elided ...] 00:30:35.129 [2024-11-27 06:32:54.547494] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.129 [2024-11-27 06:32:54.547678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.129 [2024-11-27 06:32:54.547683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50192 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.547987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.547992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 
06:32:54.548031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.130 [2024-11-27 06:32:54.548141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.130 [2024-11-27 06:32:54.548147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.131 [2024-11-27 06:32:54.548533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:35.131 [2024-11-27 06:32:54.548538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.132 [2024-11-27 06:32:54.548545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3900 is same with the state(6) to be set 00:30:35.132 [2024-11-27 06:32:54.548553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:35.132 [2024-11-27 06:32:54.548557] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:35.132 [2024-11-27 06:32:54.548563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38264 len:8 PRP1 0x0 PRP2 0x0 00:30:35.132 [2024-11-27 06:32:54.548568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:35.132 [2024-11-27 06:32:54.548829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:35.132 [2024-11-27 06:32:54.548887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f93f70 (9): Bad file descriptor 00:30:35.132 [2024-11-27 06:32:54.550181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f93f70 (9): Bad file descriptor 00:30:35.132 [2024-11-27 06:32:54.550203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:35.132 [2024-11-27 06:32:54.550208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:35.132 [2024-11-27 06:32:54.550215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:35.132 [2024-11-27 06:32:54.550222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:30:35.132 [2024-11-27 06:32:54.550229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:35.132 06:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 116847 00:30:37.011 14304.50 IOPS, 55.88 MiB/s [2024-11-27T06:32:56.802Z] 9536.33 IOPS, 37.25 MiB/s [2024-11-27T06:32:56.802Z] [2024-11-27 06:32:56.550326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.272 [2024-11-27 06:32:56.550356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f93f70 with addr=10.0.0.3, port=4420 00:30:37.272 [2024-11-27 06:32:56.550363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93f70 is same with the state(6) to be set 00:30:37.272 [2024-11-27 06:32:56.550374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f93f70 (9): Bad file descriptor 00:30:37.272 [2024-11-27 06:32:56.550384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:37.272 [2024-11-27 06:32:56.550388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:37.272 [2024-11-27 06:32:56.550394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:37.272 [2024-11-27 06:32:56.550400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:30:37.272 [2024-11-27 06:32:56.550406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:39.156 7152.25 IOPS, 27.94 MiB/s [2024-11-27T06:32:58.686Z] 5721.80 IOPS, 22.35 MiB/s [2024-11-27T06:32:58.686Z] [2024-11-27 06:32:58.550500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.156 [2024-11-27 06:32:58.550530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f93f70 with addr=10.0.0.3, port=4420 00:30:39.156 [2024-11-27 06:32:58.550537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93f70 is same with the state(6) to be set 00:30:39.156 [2024-11-27 06:32:58.550548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f93f70 (9): Bad file descriptor 00:30:39.156 [2024-11-27 06:32:58.550557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:39.156 [2024-11-27 06:32:58.550561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:39.156 [2024-11-27 06:32:58.550567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:39.156 [2024-11-27 06:32:58.550572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:30:39.156 [2024-11-27 06:32:58.550577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:41.034 4768.17 IOPS, 18.63 MiB/s [2024-11-27T06:33:00.564Z] 4087.00 IOPS, 15.96 MiB/s [2024-11-27T06:33:00.564Z] [2024-11-27 06:33:00.550623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:41.034 [2024-11-27 06:33:00.550652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:30:41.034 [2024-11-27 06:33:00.550658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:30:41.034 [2024-11-27 06:33:00.550663] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:30:41.034 [2024-11-27 06:33:00.550669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
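The trace above shows the cadence the test is exercising: each reconnect attempt fails with connect() errno 111 (ECONNREFUSED on Linux), the host marks the controller failed, and it schedules the next reconnect roughly 2 seconds later (06:32:54 -> :56 -> :58 -> 06:33:00) while throughput halves as queued I/O drains. One plausible way to get this cadence is via the reconnect options on controller attach; the sketch below is illustrative only — the option names --reconnect-delay-sec and --ctrlr-loss-timeout-sec are assumptions about rpc.py, not read from this log, although the address, port, and subsystem NQN are the ones seen above.

# Illustrative attach with a 2 s reconnect delay and an 8 s controller-loss
# timeout, matching the ~2 s retry spacing visible in the trace above.
# Long option names are assumptions, not taken from this log.
scripts/rpc.py bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 2 \
    --ctrlr-loss-timeout-sec 8

With those values the host would record three reconnect delays (at ~3.2 s, ~5.2 s, and ~7.2 s into the run) before the loss timeout expires, which is exactly what the probe output below shows.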
00:30:42.264 3576.12 IOPS, 13.97 MiB/s 00:30:42.264 Latency(us) 00:30:42.264 [2024-11-27T06:33:01.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.264 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:30:42.264 NVMe0n1 : 8.12 3522.40 13.76 15.76 0.00 36113.31 2659.25 7020619.62 00:30:42.264 [2024-11-27T06:33:01.794Z] =================================================================================================================== 00:30:42.264 [2024-11-27T06:33:01.794Z] Total : 3522.40 13.76 15.76 0.00 36113.31 2659.25 7020619.62 00:30:42.264 { 00:30:42.264 "results": [ 00:30:42.264 { 00:30:42.264 "job": "NVMe0n1", 00:30:42.264 "core_mask": "0x4", 00:30:42.264 "workload": "randread", 00:30:42.264 "status": "finished", 00:30:42.264 "queue_depth": 128, 00:30:42.264 "io_size": 4096, 00:30:42.264 "runtime": 8.122011, 00:30:42.264 "iops": 3522.403503270311, 00:30:42.264 "mibps": 13.759388684649652, 00:30:42.264 "io_failed": 128, 00:30:42.264 "io_timeout": 0, 00:30:42.264 "avg_latency_us": 36113.30820700196, 00:30:42.264 "min_latency_us": 2659.249230769231, 00:30:42.264 "max_latency_us": 7020619.618461538 00:30:42.264 } 00:30:42.264 ], 00:30:42.264 "core_count": 1 00:30:42.264 } 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:42.264 Attaching 5 probes... 00:30:42.264 1232.101164: reset bdev controller NVMe0 00:30:42.264 1232.202251: reconnect bdev controller NVMe0 00:30:42.264 3233.543813: reconnect delay bdev controller NVMe0 00:30:42.264 3233.556681: reconnect bdev controller NVMe0 00:30:42.264 5233.717979: reconnect delay bdev controller NVMe0 00:30:42.264 5233.730505: reconnect bdev controller NVMe0 00:30:42.264 7233.889090: reconnect delay bdev controller NVMe0 00:30:42.264 7233.901279: reconnect bdev controller NVMe0 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 116794 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 116770 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 116770 ']' 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 116770 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116770 00:30:42.264 killing process with pid 116770 00:30:42.264 Received shutdown signal, test time was about 8.177460 seconds 00:30:42.264 00:30:42.264 Latency(us) 00:30:42.264 [2024-11-27T06:33:01.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.264 [2024-11-27T06:33:01.794Z] =================================================================================================================== 00:30:42.264 [2024-11-27T06:33:01.794Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:42.264 
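The pass/fail gate visible above (host/timeout.sh@132) counts occurrences of 'reconnect delay bdev controller NVMe0' in trace.txt and fails the test unless more than two were recorded; the traced evaluation (( 3 <= 2 )) is false, so three delays were found and the guard did not trip. A minimal standalone sketch of the same check, assuming only the trace path shown in the log:

# Recreate the timeout.sh assertion: require at least three recorded
# reconnect delays in the bdevperf trace before declaring success.
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
if (( count <= 2 )); then
    echo "expected >2 reconnect delays, saw $count" >&2
    exit 1
fi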
06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116770' 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 116770 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 116770 00:30:42.264 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.522 rmmod nvme_tcp 00:30:42.522 rmmod nvme_fabrics 00:30:42.522 rmmod nvme_keyring 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 116203 ']' 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 116203 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 116203 ']' 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 116203 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116203 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116203' 00:30:42.522 killing process with pid 116203 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 116203 00:30:42.522 06:33:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 116203 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
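The teardown above (nvmf/common.sh@124-128) tolerates a transiently busy nvme-tcp module by disabling errexit and retrying removal up to 20 times before unloading nvme-fabrics. A condensed sketch of that loop follows; the retry bound and commands come from the trace, while the break-on-success and the sleep between attempts are assumptions added for readability.

# Sketch of the module unload retry seen at nvmf/common.sh@125-126:
# modprobe -r can fail while references are still being dropped.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1   # back-off between attempts is an assumption, not from the trace
done
modprobe -v -r nvme-fabrics
set -e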
00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.782 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.044 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:30:43.044 ************************************ 00:30:43.044 END TEST nvmf_timeout 00:30:43.044 ************************************ 00:30:43.044 00:30:43.044 real 0m44.450s 00:30:43.044 user 2m10.816s 00:30:43.044 sys 0m3.472s 00:30:43.044 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.044 06:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:43.044 06:33:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:30:43.044 06:33:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:43.044 00:30:43.044 real 6m11.218s 00:30:43.044 user 17m1.476s 00:30:43.044 sys 0m52.407s 00:30:43.044 06:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.044 06:33:02 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:43.044 ************************************ 00:30:43.044 END TEST nvmf_host 00:30:43.044 ************************************ 00:30:43.044 06:33:02 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:43.044 06:33:02 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:43.044 06:33:02 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:43.044 06:33:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:43.044 06:33:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.044 06:33:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.044 ************************************ 00:30:43.044 START TEST nvmf_target_core_interrupt_mode 00:30:43.044 ************************************ 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:43.044 * Looking for test storage... 00:30:43.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.044 --rc genhtml_branch_coverage=1 00:30:43.044 --rc genhtml_function_coverage=1 00:30:43.044 --rc genhtml_legend=1 00:30:43.044 --rc geninfo_all_blocks=1 00:30:43.044 --rc geninfo_unexecuted_blocks=1 00:30:43.044 00:30:43.044 ' 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.044 --rc genhtml_branch_coverage=1 00:30:43.044 --rc genhtml_function_coverage=1 00:30:43.044 --rc genhtml_legend=1 00:30:43.044 --rc geninfo_all_blocks=1 00:30:43.044 --rc geninfo_unexecuted_blocks=1 00:30:43.044 00:30:43.044 ' 00:30:43.044 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.044 --rc genhtml_branch_coverage=1 00:30:43.044 --rc genhtml_function_coverage=1 00:30:43.044 --rc genhtml_legend=1 00:30:43.044 --rc geninfo_all_blocks=1 00:30:43.044 --rc geninfo_unexecuted_blocks=1 00:30:43.044 00:30:43.044 ' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.045 --rc genhtml_branch_coverage=1 00:30:43.045 --rc genhtml_function_coverage=1 00:30:43.045 --rc genhtml_legend=1 00:30:43.045 --rc geninfo_all_blocks=1 00:30:43.045 --rc geninfo_unexecuted_blocks=1 00:30:43.045 00:30:43.045 ' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:43.045 ************************************ 00:30:43.045 START TEST nvmf_abort 00:30:43.045 ************************************ 00:30:43.045 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:43.307 * Looking for test storage... 00:30:43.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.307 --rc genhtml_branch_coverage=1 00:30:43.307 --rc genhtml_function_coverage=1 00:30:43.307 --rc genhtml_legend=1 00:30:43.307 --rc geninfo_all_blocks=1 00:30:43.307 --rc geninfo_unexecuted_blocks=1 00:30:43.307 00:30:43.307 ' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.307 --rc genhtml_branch_coverage=1 00:30:43.307 --rc genhtml_function_coverage=1 00:30:43.307 --rc genhtml_legend=1 00:30:43.307 --rc geninfo_all_blocks=1 00:30:43.307 --rc geninfo_unexecuted_blocks=1 00:30:43.307 00:30:43.307 ' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.307 --rc genhtml_branch_coverage=1 00:30:43.307 --rc genhtml_function_coverage=1 00:30:43.307 --rc genhtml_legend=1 00:30:43.307 --rc geninfo_all_blocks=1 00:30:43.307 --rc geninfo_unexecuted_blocks=1 00:30:43.307 00:30:43.307 ' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.307 --rc genhtml_branch_coverage=1 00:30:43.307 --rc genhtml_function_coverage=1 00:30:43.307 --rc genhtml_legend=1 00:30:43.307 --rc geninfo_all_blocks=1 00:30:43.307 --rc geninfo_unexecuted_blocks=1 00:30:43.307 00:30:43.307 ' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.307 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.308 06:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:43.308 Cannot find device "nvmf_init_br" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:43.308 Cannot find device "nvmf_init_br2" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:43.308 Cannot find device "nvmf_tgt_br" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:43.308 Cannot find device "nvmf_tgt_br2" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:43.308 Cannot find device "nvmf_init_br" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:43.308 Cannot find device "nvmf_init_br2" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:43.308 Cannot find device "nvmf_tgt_br" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:43.308 Cannot find device "nvmf_tgt_br2" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:43.308 Cannot find device "nvmf_br" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:43.308 Cannot find device "nvmf_init_if" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:43.308 Cannot find device "nvmf_init_if2" 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:43.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:43.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:43.308 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:43.568 
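The nvmf_veth_init trace above builds the test network: one network namespace for the target, two initiator-side veth pairs and two target-side pairs, with the target ends moved into the namespace. Condensed into a standalone sketch (names and addresses taken verbatim from the trace; requires root):

    ip netns add nvmf_tgt_ns_spdk
    # Four veth pairs: the *_if ends carry traffic, the *_br ends get
    # enslaved to the nvmf_br bridge in the steps that follow below.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target ends live in the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Initiators get 10.0.0.1/.2 on the host; targets get 10.0.0.3/.4 in the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up

The trace then enslaves the four *_br ends to the nvmf_br bridge and, via the ipts wrapper, inserts iptables ACCEPT rules for TCP port 4420 tagged with an 'SPDK_NVMF:' comment; that tag is what lets the teardown near the end of this test drop exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore. The four pings that follow confirm connectivity in both directions before the target starts.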
06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:43.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:43.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:30:43.568 00:30:43.568 --- 10.0.0.3 ping statistics --- 00:30:43.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.568 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:43.568 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:43.568 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:30:43.568 00:30:43.568 --- 10.0.0.4 ping statistics --- 00:30:43.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.568 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:43.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:30:43.568 00:30:43.568 --- 10.0.0.1 ping statistics --- 00:30:43.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.568 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:43.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:30:43.568 00:30:43.568 --- 10.0.0.2 ping statistics --- 00:30:43.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.568 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=117264 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 117264 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 117264 ']' 00:30:43.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.568 06:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.568 [2024-11-27 06:33:03.028395] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:43.568 [2024-11-27 06:33:03.029175] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:30:43.568 [2024-11-27 06:33:03.029219] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.826 [2024-11-27 06:33:03.144900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:43.826 [2024-11-27 06:33:03.170684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:43.826 [2024-11-27 06:33:03.189617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.826 [2024-11-27 06:33:03.189655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.826 [2024-11-27 06:33:03.189661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.826 [2024-11-27 06:33:03.189666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.826 [2024-11-27 06:33:03.189671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.826 [2024-11-27 06:33:03.190341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.826 [2024-11-27 06:33:03.190683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.826 [2024-11-27 06:33:03.190693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.826 [2024-11-27 06:33:03.239433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:43.826 [2024-11-27 06:33:03.240028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:43.826 [2024-11-27 06:33:03.240028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
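The target binary itself is started inside the namespace via the NVMF_TARGET_NS_CMD prefix; reassembled from the trace above, the full invocation is:

    # -i 0: shared-memory ID (NVMF_APP_SHM_ID); -e 0xFFFF: tracepoint group mask,
    #   snapshot later with 'spdk_trace -s nvmf -i 0' as the notices suggest.
    # --interrupt-mode: added by build_nvmf_app_args for this run variant.
    # -m 0xE: core mask 0b1110, matching the three reactors reported on cores 1-3.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE

waitforlisten then polls until pid 117264 answers on the default RPC socket /var/tmp/spdk.sock, which is why the 'Waiting for process to start up...' line appears before the app_thread and nvmf_tgt_poll_group threads report switching to interrupt mode.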
00:30:43.826 [2024-11-27 06:33:03.241122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:44.393 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.393 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:44.393 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.393 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.393 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 [2024-11-27 06:33:03.959379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.651 06:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 Malloc0 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 Delay0 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
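rpc_cmd in abort.sh is the test harness wrapper around SPDK's scripts/rpc.py talking to that socket. The configuration traced above and completed just below amounts to the following sequence (a sketch; transport options reproduced verbatim from the trace rather than interpreted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MiB ramdisk with 4096-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE).
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    # Delay bdev latencies are given in microseconds: ~1 s average and p99 for
    # both reads and writes, which keeps I/O outstanding long enough to abort.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Subsystem cnode0: -a allows any host, -s sets the serial number.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420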
00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 [2024-11-27 06:33:04.031498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.651 06:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:44.909 [2024-11-27 06:33:04.206595] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:46.818 Initializing NVMe Controllers 00:30:46.818 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:30:46.818 controller IO queue size 128 less than required 00:30:46.818 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:46.818 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:46.818 Initialization complete. Launching workers. 
00:30:46.818 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31903 00:30:46.818 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31960, failed to submit 66 00:30:46.818 success 31903, unsuccessful 57, failed 0 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:46.818 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:46.818 rmmod nvme_tcp 00:30:47.077 rmmod nvme_fabrics 00:30:47.077 rmmod nvme_keyring 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 117264 ']' 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 117264 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 117264 ']' 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 117264 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117264 00:30:47.077 killing process with pid 117264 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117264' 00:30:47.077 
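For reference, the abort workload whose results appear above was launched at target/abort.sh line 30 as:

    # -r: transport ID of the listener configured earlier; -c 0x1: single core.
    # -q 128: queue depth; the 'controller IO queue size 128 less than required'
    #   notice above is the example warning that requests may queue in the driver
    #   at this depth. Reading -t as run time in seconds and -l as log level
    #   follows the usual SPDK example conventions (an assumption; the log does
    #   not spell the flags out).
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The counters line up as expected for this test: with Delay0 adding roughly a second of latency per I/O, almost every submitted I/O is still outstanding when its abort arrives, so the 31903 I/Os reported 'failed' on the NS line and the 31903 aborts reported 'success' on the CTRLR line are the same events counted from both sides, and the test proceeds to clean teardown.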
06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 117264 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 117264 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:47.077 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.335 06:33:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:30:47.335 00:30:47.335 real 0m4.222s 00:30:47.335 user 0m8.939s 00:30:47.335 sys 0m1.204s 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:47.335 ************************************ 00:30:47.335 END TEST nvmf_abort 00:30:47.335 ************************************ 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:47.335 ************************************ 00:30:47.335 START TEST nvmf_ns_hotplug_stress 00:30:47.335 ************************************ 00:30:47.335 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:47.594 * Looking for test storage... 00:30:47.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:47.594 06:33:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:47.594 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.595 --rc genhtml_branch_coverage=1 00:30:47.595 --rc genhtml_function_coverage=1 00:30:47.595 --rc genhtml_legend=1 00:30:47.595 --rc geninfo_all_blocks=1 00:30:47.595 --rc geninfo_unexecuted_blocks=1 00:30:47.595 00:30:47.595 ' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.595 --rc genhtml_branch_coverage=1 00:30:47.595 --rc genhtml_function_coverage=1 00:30:47.595 --rc genhtml_legend=1 00:30:47.595 --rc geninfo_all_blocks=1 00:30:47.595 --rc geninfo_unexecuted_blocks=1 00:30:47.595 00:30:47.595 
' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.595 --rc genhtml_branch_coverage=1 00:30:47.595 --rc genhtml_function_coverage=1 00:30:47.595 --rc genhtml_legend=1 00:30:47.595 --rc geninfo_all_blocks=1 00:30:47.595 --rc geninfo_unexecuted_blocks=1 00:30:47.595 00:30:47.595 ' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:47.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.595 --rc genhtml_branch_coverage=1 00:30:47.595 --rc genhtml_function_coverage=1 00:30:47.595 --rc genhtml_legend=1 00:30:47.595 --rc geninfo_all_blocks=1 00:30:47.595 --rc geninfo_unexecuted_blocks=1 00:30:47.595 00:30:47.595 ' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.595 06:33:06 
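The lcov probe a few lines back steps through cmp_versions in scripts/common.sh: both version strings are split on '.', '-' and ':', then compared component by component. A self-contained sketch of that logic, assuming purely numeric components (the real helper also normalizes non-numeric fields through decimal()):

  # Return 0 if version $1 sorts strictly before version $2.
  lt() {
      local -a ver1 ver2
      local v max
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          # Missing components count as 0, so "2" compares like "2.0".
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the branch the trace took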
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:47.595 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:47.596 06:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.596 06:33:07 
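The PATH dumps above show paths/export.sh prepending the same three toolchain directories on every source, so the list keeps accumulating duplicates across nested test scripts. A hypothetical guard (not in the repo) that would keep the prepend idempotent:

  # Prepend a directory to PATH only if it is not already present.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;               # already on PATH, do nothing
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  export PATH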
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:47.596 Cannot find device "nvmf_init_br" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:30:47.596 Cannot find device "nvmf_init_br2" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:47.596 Cannot find device "nvmf_tgt_br" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:47.596 Cannot find device "nvmf_tgt_br2" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:47.596 Cannot find device "nvmf_init_br" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:47.596 Cannot find device "nvmf_init_br2" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:47.596 Cannot find device "nvmf_tgt_br" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:47.596 Cannot find device "nvmf_tgt_br2" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:47.596 Cannot find device "nvmf_br" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:47.596 Cannot find device "nvmf_init_if" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:47.596 Cannot find device "nvmf_init_if2" 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:47.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:47.596 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:47.856 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:47.856 06:33:07 
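Commands 177-214 above assemble the veth test topology: two initiator interfaces stay in the root namespace (10.0.0.1/.2), two target interfaces move into nvmf_tgt_ns_spdk (10.0.0.3/.4), and all four peer ends join one bridge. Condensed to a single initiator/target pair (run as root; the second pair nvmf_init_if2/nvmf_tgt_if2 is built the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                 # bridge the peer ends together
  ip link set nvmf_tgt_br  master nvmf_br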
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:47.856 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:47.856 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:30:47.856 00:30:47.856 --- 10.0.0.3 ping statistics --- 00:30:47.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.856 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:30:47.856 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:47.857 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:47.857 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:30:47.857 00:30:47.857 --- 10.0.0.4 ping statistics --- 00:30:47.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.857 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:47.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:47.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:30:47.857 00:30:47.857 --- 10.0.0.1 ping statistics --- 00:30:47.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.857 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:47.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:47.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:30:47.857 00:30:47.857 --- 10.0.0.2 ping statistics --- 00:30:47.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.857 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:47.857 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=117573 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 117573 00:30:48.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
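The ipts wrapper seen at commands 217-219 tags every rule it installs, which is what lets the teardown shown earlier strip them with a single grep -v SPDK_NVMF. A minimal re-implementation under that assumption (the expanded iptables invocations match the trace verbatim):

  ipts() {
      # Append the original rule text as a comment so iptables-save output
      # carries a greppable "SPDK_NVMF:" marker for every rule we own.
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT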
00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 117573 ']' 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.114 06:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:48.114 [2024-11-27 06:33:07.424551] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:48.114 [2024-11-27 06:33:07.425308] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:30:48.114 [2024-11-27 06:33:07.425345] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.114 [2024-11-27 06:33:07.541186] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:48.114 [2024-11-27 06:33:07.570246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:48.114 [2024-11-27 06:33:07.589121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.114 [2024-11-27 06:33:07.589153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.114 [2024-11-27 06:33:07.589159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.114 [2024-11-27 06:33:07.589164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.114 [2024-11-27 06:33:07.589169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.114 [2024-11-27 06:33:07.589977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.114 [2024-11-27 06:33:07.590459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.114 [2024-11-27 06:33:07.590471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.114 [2024-11-27 06:33:07.640147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:48.114 [2024-11-27 06:33:07.641096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:48.114 [2024-11-27 06:33:07.641319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
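nvmf_tgt was launched with -m 0xE, which is why the reactor messages above appear on cores 1-3 only: bit 0 of the mask (core 0) is left clear. Decoding the mask makes that explicit:

  # Expand an SPDK/DPDK core mask into per-core reactor assignments.
  mask=0xE                      # binary 1110 -> cores 1, 2, 3
  for core in $(seq 0 3); do
      if (( (mask >> core) & 1 )); then
          echo "core $core: reactor"
      else
          echo "core $core: unused"
      fi
  done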
00:30:48.114 [2024-11-27 06:33:07.641401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:49.049 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.049 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:49.050 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.050 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.050 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:49.050 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.050 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:49.050 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:49.050 [2024-11-27 06:33:08.527309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.050 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:49.311 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:49.571 [2024-11-27 06:33:08.959966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:49.572 06:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:49.832 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:50.091 Malloc0 00:30:50.091 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:50.352 Delay0 00:30:50.352 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.352 06:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:50.613 NULL1 00:30:50.613 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:50.875 
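The fixture built between commands 27 and 36 above is a plain rpc.py sequence against the default /var/tmp/spdk.sock; replayed here exactly as traced (flag comments reflect current rpc.py option names):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # -u: 8 KiB I/O unit size; -o as in the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                          # allow any host, cap at 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0                    # 32 MiB RAM bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000             # 1 s injected read/write latencies
  $rpc bdev_null_create NULL1 1000 512                         # 1000 MiB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1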
06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=117699 00:30:50.875 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:50.875 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:50.875 06:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.261 Read completed with error (sct=0, sc=11) 00:30:52.261 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:52.261 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:52.261 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:52.522 true 00:30:52.522 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:52.522 06:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.464 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.464 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:53.464 06:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:53.725 true 00:30:53.725 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:53.725 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.986 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.986 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:53.986 06:33:13 
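The remainder of the run repeats a single loop: while spdk_nvme_perf (started with -Q 1000, which is why error prints are capped and then suppressed) issues random reads against the subsystem, the script detaches and re-adds namespace 1 and grows NULL1 by 1 MiB per pass; kill -0 only tests that perf is still alive. The suppressed completions with sct=0, sc=11 are consistent with the NVMe generic status "Invalid Namespace or Format" (0x0b) while the namespace is out. A condensed sketch of that loop (the real logic lives in test/nvmf/target/ns_hotplug_stress.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &    # -Q 1000: quiet repeated I/O errors
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do        # loop until perf's 30 s run ends
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"     # grow the null bdev by 1 MiB
      sleep 1                                      # pacing between passes (assumed)
  done
  wait "$PERF_PID"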
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:54.261 true 00:30:54.261 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:54.261 06:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.212 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.471 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:55.471 06:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:55.729 true 00:30:55.729 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:55.729 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.987 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.987 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:55.987 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:56.244 true 00:30:56.244 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:56.244 06:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.180 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.441 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:57.441 06:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:57.698 true 00:30:57.698 06:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:57.699 06:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.956 06:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.214 06:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:58.214 06:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:58.472 true 00:30:58.472 06:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:58.472 06:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.406 06:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.663 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:59.663 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:59.663 true 00:30:59.923 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:30:59.923 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.923 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.180 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:00.180 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:00.438 true 00:31:00.438 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:00.438 06:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.374 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.632 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:01.632 06:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:01.632 true 00:31:01.632 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:01.632 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.890 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.147 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:02.147 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:02.404 true 00:31:02.404 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:02.404 06:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.336 06:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.593 06:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:03.593 06:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:03.850 true 00:31:03.850 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:03.851 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.851 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.108 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:04.108 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:04.365 true 00:31:04.365 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:04.365 06:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.299 06:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.557 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:05.558 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:05.816 true 
00:31:05.816 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:05.816 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.074 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.331 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:06.332 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:06.332 true 00:31:06.332 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:06.332 06:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.266 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.524 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:07.524 06:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:07.782 true 00:31:07.782 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:07.782 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.040 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.040 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:08.040 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:08.297 true 00:31:08.298 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:08.298 06:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.668 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.668 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1018 00:31:09.668 06:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:09.668 true 00:31:09.668 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:09.668 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.925 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.181 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:10.181 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:10.438 true 00:31:10.438 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:10.438 06:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.387 06:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.657 06:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:11.657 06:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:11.657 true 00:31:11.657 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:11.657 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.916 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.173 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:12.173 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:12.431 true 00:31:12.431 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:12.431 06:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.364 06:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.622 06:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:13.622 06:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:13.881 true 00:31:13.881 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:13.881 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.881 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.139 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:14.139 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:14.398 true 00:31:14.398 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:14.398 06:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.333 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.333 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.590 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:15.590 06:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:15.848 true 00:31:15.848 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:15.848 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.106 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.106 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:16.106 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:16.363 true 00:31:16.363 06:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:16.363 06:33:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.296 06:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.555 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:17.555 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:17.813 true 00:31:17.813 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:17.813 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.071 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.328 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:18.328 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:18.328 true 00:31:18.328 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:18.328 06:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.701 06:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.701 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:19.701 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:19.701 true 00:31:19.959 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699 00:31:19.959 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.959 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.216 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:20.216 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:31:20.474 true
00:31:20.474 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699
00:31:20.474 06:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:21.408 Initializing NVMe Controllers
00:31:21.408 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:31:21.408 Controller IO queue size 128, less than required.
00:31:21.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:21.408 Controller IO queue size 128, less than required.
00:31:21.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:21.408 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:21.408 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:21.408 Initialization complete. Launching workers.
00:31:21.408 ========================================================
00:31:21.408                                                                           Latency(us)
00:31:21.408 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:31:21.408 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     312.46       0.15  231970.20    3618.16 1038013.19
00:31:21.408 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   14597.68       7.13    8768.95    2073.75  416203.68
00:31:21.408 ========================================================
00:31:21.408 Total                                                                    :   14910.15       7.28   13446.47    2073.75 1038013.19
00:31:21.408
00:31:21.408 06:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:21.665 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:31:21.665 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:31:21.922 true
00:31:21.922 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 117699
00:31:21.922 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (117699) - No such process
00:31:21.922 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 117699
00:31:21.922 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:21.922 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:22.180 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:22.180 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:22.180 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:22.180 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.180 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:22.437 null0 00:31:22.437 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.437 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.437 06:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:22.694 null1 00:31:22.694 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.694 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.694 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:22.694 null2 00:31:22.952 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.952 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.952 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:22.952 null3 00:31:22.952 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:22.952 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:22.952 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:23.209 null4 00:31:23.209 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:23.209 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:23.209 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:23.467 null5 00:31:23.467 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:23.467 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:23.467 06:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:23.725 null6 00:31:23.725 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
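
Everything from null_size=1015 down to the "No such process" message above is one loop of ns_hotplug_stress.sh (lines 44-50): while a background I/O workload (PID 117699) is still alive, namespace 1 is hot-removed and re-attached and the NULL1 bdev is grown by one unit per pass; the bare "true" entries are the return value printed by the bdev_null_resize RPC. A minimal bash sketch of that phase, reconstructed from the trace ($rpc, $nqn, and $perf_pid are shorthand for the rpc.py path, nqn.2016-06.io.spdk:cnode1, and the workload PID; they are not names taken from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$perf_pid" 2> /dev/null; do      # sh@44: is the workload still running?
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # sh@45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                # sh@49: next target size
        "$rpc" bdev_null_resize NULL1 "$null_size"  # sh@50: resize NULL1 under I/O
    done
    wait "$perf_pid"                                # sh@53: reap the exited workload

Once kill -0 fails, the loop ends, the remaining namespaces are removed (sh@54-55), and the test switches to the concurrent phase traced below.
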
00:31:23.725 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:23.725 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:23.983 null7 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:23.983 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
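
The sh@59-60 loop that just completed provisions the backing devices for the concurrent phase: eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size. As a sketch (same $rpc shorthand as above):

    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # sh@60: name, size in MB, block size
    done
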
00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
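
Each backgrounded worker in these traces runs add_remove <nsid> <bdev> (sh@14-18): ten passes that attach its bdev to cnode1 under a fixed namespace ID with nvmf_subsystem_add_ns -n, then detach it again. A sketch consistent with the trace:

    add_remove() {                          # sh@14: one worker per NSID/bdev pair
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do      # sh@16: ten add/remove cycles
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18
        done
    }
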
00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
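
The driver loop (sh@62-64) pairs NSID i+1 with bdev null(i), launches each add_remove in the background, and records the worker PIDs so it can block on all of them; the wait on the eight PIDs (118732 118733 ...) appears just below. Sketch:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # sh@63: NSIDs are 1-based, bdev names 0-based
        pids+=($!)                           # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                        # sh@66: block until all eight workers finish
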
00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 118732 118733 118736 118737 118739 118741 118743 118745 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:23.984 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.984 06:33:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.251 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.509 06:33:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.509 06:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.509 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.767 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.767 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:24.768 06:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.768 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.026 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.284 06:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.284 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.285 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:25.285 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.285 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.285 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:25.285 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.285 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
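
From here to the end of the run the eight workers' RPCs interleave freely, which is the point of the stress test: arbitrary orderings of concurrent add/remove across NSIDs 1-8 are exercised, while each worker only ever touches its own namespace ID. To watch the namespace set fluctuate during such a run, polling the target state is enough (a hypothetical helper, not part of the test; nvmf_get_subsystems is the standard SPDK RPC that reports a subsystem's namespaces as JSON):

    # List the NSIDs currently attached to cnode1 (assumed output shape:
    # a JSON array of subsystems, each with "nqn" and a "namespaces" list).
    "$rpc" nvmf_get_subsystems | python3 -c '
    import json, sys
    for s in json.load(sys.stdin):
        if s["nqn"] == "nqn.2016-06.io.spdk:cnode1":
            print(sorted(ns["nsid"] for ns in s.get("namespaces", [])))'
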
00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:25.542 06:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:25.542 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.542 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.542 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:25.543 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.543 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.543 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:25.800 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.057 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.058 06:33:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.058 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.315 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.315 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.315 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.316 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.573 06:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.573 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.573 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.573 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.573 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.573 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.573 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.831 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.832 06:33:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.832 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.090 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.374 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.646 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.646 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.646 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.646 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.646 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.646 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.646 06:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.646 
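
The churn above is driven by just two rpc.py primitives: ns_hotplug_stress.sh@17 attaches one of the null bdevs (null0-null7) to subsystem nqn.2016-06.io.spdk:cnode1 under an explicit namespace ID, and @18 detaches a namespace again, while the @16 lines are the loop counter ticking toward ten passes. A minimal sketch of that shape, assuming a randomized add-or-remove choice per step (the real script's body is only partially visible in this trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        nsid=$(( RANDOM % 8 + 1 ))          # namespace IDs 1-8 map onto bdevs null0-null7
        if (( RANDOM % 2 )); then
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$(( nsid - 1 ))"
        else
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        fi
    done
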
06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.646 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.904 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.905 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.905 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.905 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.905 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.905 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.190 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.447 06:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:28.704 rmmod nvme_tcp 00:31:28.704 rmmod nvme_fabrics 00:31:28.704 rmmod nvme_keyring 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 117573 ']' 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 117573 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 117573 ']' 00:31:28.704 06:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 117573 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:28.704 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117573 00:31:28.962 killing process with pid 117573 00:31:28.962 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:28.962 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:28.962 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117573' 00:31:28.962 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 117573 00:31:28.962 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 117573 00:31:28.962 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:28.962 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:28.963 
06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:31:28.963 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:31:29.222
00:31:29.222 real 0m41.744s
00:31:29.222 user 3m0.465s
00:31:29.222 sys 0m13.243s
00:31:29.222 ************************************
00:31:29.222 END TEST nvmf_ns_hotplug_stress
00:31:29.222 ************************************
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:29.222 ************************************
00:31:29.222 START TEST nvmf_delete_subsystem
00:31:29.222 ************************************
00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:31:29.222 * Looking for test storage...
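
The real/user/sys block and the asterisk banners above come from run_test, the autotest harness wrapper that brackets every test script: print a START banner, time the script, print the matching END banner. A condensed sketch of the idea (illustrative; the real helper in common/autotest_common.sh also tracks exit codes and toggles xtrace around the banners):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # here: target/delete_subsystem.sh --transport=tcp --interrupt-mode
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
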
00:31:29.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.222 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.484 --rc genhtml_branch_coverage=1 00:31:29.484 --rc genhtml_function_coverage=1 00:31:29.484 --rc genhtml_legend=1 00:31:29.484 --rc geninfo_all_blocks=1 00:31:29.484 --rc geninfo_unexecuted_blocks=1 00:31:29.484 00:31:29.484 ' 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.484 --rc genhtml_branch_coverage=1 00:31:29.484 --rc genhtml_function_coverage=1 00:31:29.484 --rc genhtml_legend=1 00:31:29.484 --rc geninfo_all_blocks=1 00:31:29.484 --rc geninfo_unexecuted_blocks=1 00:31:29.484 00:31:29.484 ' 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.484 --rc genhtml_branch_coverage=1 00:31:29.484 --rc genhtml_function_coverage=1 00:31:29.484 --rc genhtml_legend=1 00:31:29.484 --rc geninfo_all_blocks=1 00:31:29.484 --rc geninfo_unexecuted_blocks=1 00:31:29.484 00:31:29.484 ' 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:29.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.484 --rc genhtml_branch_coverage=1 00:31:29.484 --rc genhtml_function_coverage=1 00:31:29.484 --rc 
genhtml_legend=1 00:31:29.484 --rc geninfo_all_blocks=1 00:31:29.484 --rc geninfo_unexecuted_blocks=1 00:31:29.484 00:31:29.484 ' 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.484 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.485 06:33:48 
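
A side effect worth noting in the PATH assignments just above: paths/export.sh prepends the Go, protoc and golangci directories every time it is sourced, so after several nested sources the traced PATH carries each entry many times over. The duplicates are harmless, just noisy; an idempotent prepend (illustrative only, not part of the repo) would avoid the growth:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;          # already present, keep PATH unchanged
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
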
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.485 06:33:48 
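
The xtrace_disable_per_cmd/eval pair that keeps appearing in this trace (here around _remove_spdk_ns) is how the harness silences tracing for a single command: the xtrace stream is tied to a dedicated file descriptor (15 in this build, presumably via BASH_XTRACEFD), so redirecting that descriptor to /dev/null drops only the wrapped command's trace output while set -x stays active globally. In essence (a sketch inferred from the traced eval):

    xtrace_disable_per_cmd() {
        # expands to e.g.:  _remove_spdk_ns 15> /dev/null
        eval "$* 15> /dev/null"
    }
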
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:29.485 Cannot find device "nvmf_init_br" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:29.485 Cannot find device "nvmf_init_br2" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:29.485 Cannot find device "nvmf_tgt_br" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:29.485 Cannot find device "nvmf_tgt_br2" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:29.485 Cannot find device "nvmf_init_br" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:29.485 Cannot find device "nvmf_init_br2" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:29.485 Cannot find device "nvmf_tgt_br" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:29.485 Cannot find device "nvmf_tgt_br2" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:29.485 Cannot find device "nvmf_br" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:29.485 Cannot find device "nvmf_init_if" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:29.485 Cannot find device "nvmf_init_if2" 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:29.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:29.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:29.485 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:29.486 06:33:48 
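
What nvmf_veth_init is assembling here, and in the lines just below, is a self-contained test network: one end of each veth pair keeps an address (initiator side in the root namespace, target side inside nvmf_tgt_ns_spdk), and the peer ends are enslaved to the nvmf_br bridge so initiator and target can reach each other. Condensed to a single initiator/target pair (all commands as traced; the second pair, 10.0.0.2/10.0.0.4, is wired identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # (the matching "ip link set ... up" steps are omitted for brevity)
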
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:29.486 06:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:29.486 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:29.486 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:29.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:29.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:31:29.747 00:31:29.747 --- 10.0.0.3 ping statistics --- 00:31:29.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.747 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:29.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:29.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:31:29.747 00:31:29.747 --- 10.0.0.4 ping statistics --- 00:31:29.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.747 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:29.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:29.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:31:29.747 00:31:29.747 --- 10.0.0.1 ping statistics --- 00:31:29.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.747 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:29.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:29.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:31:29.747 00:31:29.747 --- 10.0.0.2 ping statistics --- 00:31:29.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.747 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.747 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=120120 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 120120 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 120120 ']' 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
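The ip/ipts trace above builds the harness's veth test topology before the target starts: one veth pair per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, TCP port 4420 opened, and connectivity ping-checked in both directions. A condensed sketch using the same names and addresses as the trace (only the first of the two interface pairs is shown; error handling omitted):

# condensed sketch of the veth/bridge setup traced above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                           # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator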
00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:29.748 06:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:29.748 [2024-11-27 06:33:49.118991] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:29.748 [2024-11-27 06:33:49.119559] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:31:29.748 [2024-11-27 06:33:49.119591] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.748 [2024-11-27 06:33:49.231837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:29.748 [2024-11-27 06:33:49.258818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:30.008 [2024-11-27 06:33:49.278415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.008 [2024-11-27 06:33:49.278456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.008 [2024-11-27 06:33:49.278462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.008 [2024-11-27 06:33:49.278467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.008 [2024-11-27 06:33:49.278471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.008 [2024-11-27 06:33:49.279281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.009 [2024-11-27 06:33:49.279396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.009 [2024-11-27 06:33:49.334484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.009 [2024-11-27 06:33:49.335346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:30.009 [2024-11-27 06:33:49.335520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
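nvmfappstart then launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3, pid 120120 above) and blocks in waitforlisten until the RPC socket answers; the EAL, reactor, and intr-mode notices above are that startup completing. A minimal sketch of the wait pattern, assuming rpc.py is on PATH and the default /var/tmp/spdk.sock socket; the real helper in common/autotest_common.sh does more bookkeeping:

# minimal sketch of the waitforlisten pattern (assumed simplification of the
# common/autotest_common.sh helper): poll until the app answers on its RPC socket
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  for ((i = 100; i > 0; i--)); do            # max_retries=100, as in the trace
    kill -0 "$pid" 2> /dev/null || return 1  # target died while starting
    rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
    sleep 0.5
  done
  return 1                                   # never started listening
}
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
waitforlisten $!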
00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.575 [2024-11-27 06:33:50.053032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.575 [2024-11-27 06:33:50.081865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.575 NULL1 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.575 06:33:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.575 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.576 Delay0 00:31:30.576 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.576 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.576 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.576 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:30.835 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.835 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=120174 00:31:30.835 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:30.835 06:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:30.835 [2024-11-27 06:33:50.268143] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
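The rpc_cmd calls traced since startup reduce to six RPCs plus the initiator launch. A sketch issuing them with rpc.py directly (rpc_cmd is the harness's wrapper around it); the inline comments decode the flags and are paraphrase, not trace output:

# provisioning sequence from the trace, as plain rpc.py calls
rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                              # allow any host, up to 10 namespaces
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # ~1 s avg/p99 read/write latencies (us)
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# initiator: 5 s of 70/30 random read/write, qd 128, 512 B IOs, cores 2-3 (0xC)
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!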
00:31:32.741 06:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.741 06:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.741 06:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:33.000 [several hundred interleaved 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' submissions from the still-running perf workload omitted; the distinct state-change records follow]
[2024-11-27 06:33:52.303642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf409b0 is same with the state(6) to be set
[2024-11-27 06:33:52.318272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f35c8000c40 is same with the state(6) to be set
[2024-11-27 06:33:52.319134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15150b0 is same with the state(6) to be set
[2024-11-27 06:33:52.319163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15150b0 is same with the state(6) to be set
[2024-11-27 06:33:52.319498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f35c800d350 is same with the state(6) to be set
[2024-11-27 06:33:52.319919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f35c800d800 is same with the state(6) to be set
[2024-11-27 06:33:53.281425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d7e0 is same with the state(6) to be set
[2024-11-27 06:33:53.304236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf40b90 is same with the state(6) to be set
[2024-11-27 06:33:53.304502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3faa0 is same with the state(6) to be set
[2024-11-27 06:33:53.318955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f35c800d020 is same with the state(6) to be set
Initializing NVMe Controllers 00:31:33.937 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.937 Controller IO queue size 128, less than required.
00:31:33.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:33.937 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:33.937 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:33.937 Initialization complete. Launching workers. 00:31:33.937 ======================================================== 00:31:33.937 Latency(us) 00:31:33.937 Device Information : IOPS MiB/s Average min max 00:31:33.937 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.75 0.08 890112.40 285.83 1007249.08 00:31:33.937 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 140.88 0.07 950456.24 1231.02 2002562.08 00:31:33.937 ======================================================== 00:31:33.937 Total : 312.63 0.15 917305.56 285.83 2002562.08 00:31:33.937 00:31:33.937 [2024-11-27 06:33:53.319936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3d7e0 (9): Bad file descriptor 00:31:33.937 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:33.937 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.937 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:33.937 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 120174 00:31:33.937 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:34.505 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:34.505 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 120174 00:31:34.506 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (120174) - No such process 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 120174 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 120174 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 120174 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:34.506 06:33:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:34.506 [2024-11-27 06:33:53.844228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=120220 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:34.506 06:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:34.506 [2024-11-27 06:33:54.010414] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
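The (( delay++ > 20 )) / kill -0 / sleep 0.5 records around here are iterations of the script's bounded wait: after the subsystem is deleted (first run) or while perf runs out its own timeout (second run), the harness polls the initiator instead of blocking in a bare wait. A sketch of the pattern reconstructed from the traced commands; the failure branch is added for illustration:

# bounded wait on the initiator, per the delete_subsystem.sh@56-60 trace
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do                    # initiator still alive?
  (( delay++ > 20 )) && { echo "perf $perf_pid hung" >&2; exit 1; }
  sleep 0.5
done
# the 'kill: (pid) - No such process' lines in the log are this loop's
# exit condition firing once the perf process has gone away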
00:31:35.071 06:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:35.071 06:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:35.071 06:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:35.636 06:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:35.636 06:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:35.636 06:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:35.894 06:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:35.894 06:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:35.894 06:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:36.459 06:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:36.459 06:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:36.459 06:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:37.024 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:37.024 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:37.024 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:37.590 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:37.590 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:37.590 06:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:37.590 Initializing NVMe Controllers 00:31:37.590 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.590 Controller IO queue size 128, less than required. 00:31:37.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.590 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:37.590 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:37.590 Initialization complete. Launching workers. 
00:31:37.590 ======================================================== 00:31:37.590 Latency(us) 00:31:37.590 Device Information : IOPS MiB/s Average min max 00:31:37.590 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002548.48 1000110.69 1010894.73 00:31:37.590 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003785.05 1000113.65 1040818.09 00:31:37.590 ======================================================== 00:31:37.590 Total : 256.00 0.12 1003166.77 1000110.69 1040818.09 00:31:37.590 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 120220 00:31:38.156 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (120220) - No such process 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 120220 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.156 rmmod nvme_tcp 00:31:38.156 rmmod nvme_fabrics 00:31:38.156 rmmod nvme_keyring 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 120120 ']' 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 120120 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 120120 ']' 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 120120 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 120120 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.156 killing process with pid 120120 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120120' 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 120120 00:31:38.156 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 120120 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.414 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:31:38.674 00:31:38.674 real 0m9.344s 00:31:38.674 user 0m23.818s 00:31:38.674 sys 0m1.936s 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.674 ************************************ 00:31:38.674 END TEST nvmf_delete_subsystem 00:31:38.674 ************************************ 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:38.674 ************************************ 00:31:38.674 START TEST nvmf_host_management 00:31:38.674 ************************************ 00:31:38.674 06:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:38.674 * Looking for test storage... 
00:31:38.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.674 --rc genhtml_branch_coverage=1 00:31:38.674 --rc genhtml_function_coverage=1 00:31:38.674 --rc genhtml_legend=1 00:31:38.674 --rc geninfo_all_blocks=1 00:31:38.674 --rc geninfo_unexecuted_blocks=1 00:31:38.674 00:31:38.674 ' 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.674 --rc genhtml_branch_coverage=1 00:31:38.674 --rc genhtml_function_coverage=1 00:31:38.674 --rc genhtml_legend=1 00:31:38.674 --rc geninfo_all_blocks=1 00:31:38.674 --rc geninfo_unexecuted_blocks=1 00:31:38.674 00:31:38.674 ' 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.674 --rc genhtml_branch_coverage=1 00:31:38.674 --rc genhtml_function_coverage=1 00:31:38.674 --rc genhtml_legend=1 00:31:38.674 --rc geninfo_all_blocks=1 00:31:38.674 --rc geninfo_unexecuted_blocks=1 00:31:38.674 00:31:38.674 ' 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.674 --rc genhtml_branch_coverage=1 00:31:38.674 --rc genhtml_function_coverage=1 00:31:38.674 --rc genhtml_legend=1 
00:31:38.674 --rc geninfo_all_blocks=1 00:31:38.674 --rc geninfo_unexecuted_blocks=1 00:31:38.674 00:31:38.674 ' 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.674 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.675 06:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:38.675 06:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:38.675 Cannot find device "nvmf_init_br" 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:38.675 Cannot find device "nvmf_init_br2" 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:38.675 Cannot find device "nvmf_tgt_br" 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:38.675 Cannot find device "nvmf_tgt_br2" 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:38.675 Cannot find device "nvmf_init_br" 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:31:38.675 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:31:38.937 Cannot find device "nvmf_init_br2" 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:38.937 Cannot find device "nvmf_tgt_br" 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:38.937 Cannot find device "nvmf_tgt_br2" 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:38.937 Cannot find device "nvmf_br" 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:38.937 Cannot find device "nvmf_init_if" 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:38.937 Cannot find device "nvmf_init_if2" 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:38.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:38.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:38.937 06:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:38.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:38.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:31:38.937 00:31:38.937 --- 10.0.0.3 ping statistics --- 00:31:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.937 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:38.937 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:38.937 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:31:38.937 00:31:38.937 --- 10.0.0.4 ping statistics --- 00:31:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.937 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:38.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:38.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:31:38.937 00:31:38.937 --- 10.0.0.1 ping statistics --- 00:31:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.937 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:31:38.937 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:38.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:38.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:31:38.937 00:31:38.937 --- 10.0.0.2 ping statistics --- 00:31:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.937 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=120509 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 120509 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 120509 ']' 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.938 06:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:38.938 [2024-11-27 06:33:58.448400] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:38.938 [2024-11-27 06:33:58.449156] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:31:38.938 [2024-11-27 06:33:58.449196] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.198 [2024-11-27 06:33:58.565833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:39.198 [2024-11-27 06:33:58.595684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:39.198 [2024-11-27 06:33:58.615494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.198 [2024-11-27 06:33:58.615662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.198 [2024-11-27 06:33:58.615722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.198 [2024-11-27 06:33:58.615750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.198 [2024-11-27 06:33:58.615765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.198 [2024-11-27 06:33:58.616428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.198 [2024-11-27 06:33:58.616499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.198 [2024-11-27 06:33:58.616747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:39.198 [2024-11-27 06:33:58.616749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.198 [2024-11-27 06:33:58.663489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:39.198 [2024-11-27 06:33:58.663921] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:39.198 [2024-11-27 06:33:58.664021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:39.198 [2024-11-27 06:33:58.664280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:39.198 [2024-11-27 06:33:58.664982] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
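[Annotation] nvmfappstart launches the target inside the namespace as `ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E`; the core mask 0x1E accounts for the four reactors on cores 1-4, and --interrupt-mode is why each spdk_thread above is reported in intr mode. A hedged sketch of the start-and-wait pattern (the until-loop is illustrative; the test's waitforlisten helper performs the equivalent retry against /var/tmp/spdk.sock):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # block until the app is initialized and answering on its RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
        sleep 0.5
    done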
00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.137 [2024-11-27 06:33:59.357335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.137 Malloc0 00:31:40.137 [2024-11-27 06:33:59.433808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=120581 00:31:40.137 06:33:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 120581 /var/tmp/bdevperf.sock 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 120581 ']' 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:40.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.137 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.138 { 00:31:40.138 "params": { 00:31:40.138 "name": "Nvme$subsystem", 00:31:40.138 "trtype": "$TEST_TRANSPORT", 00:31:40.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.138 "adrfam": "ipv4", 00:31:40.138 "trsvcid": "$NVMF_PORT", 00:31:40.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.138 "hdgst": ${hdgst:-false}, 00:31:40.138 "ddgst": ${ddgst:-false} 00:31:40.138 }, 00:31:40.138 "method": "bdev_nvme_attach_controller" 00:31:40.138 } 00:31:40.138 EOF 00:31:40.138 )") 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
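[Annotation] gen_nvmf_target_json fills the heredoc template above once per subsystem argument and pipes the result through `jq .`; the resolved config for subsystem 0 is printed just below. bdevperf then reads that JSON via `--json /dev/fd/63`, i.e. bash process substitution, so the config never touches disk. A minimal stand-in with the same parameters as the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # -q 64: queue depth; -o 65536: 64 KiB I/O size
    # -w verify: write-then-read-back verification workload; -t 10: run 10 s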
00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:40.138 06:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.138 "params": { 00:31:40.138 "name": "Nvme0", 00:31:40.138 "trtype": "tcp", 00:31:40.138 "traddr": "10.0.0.3", 00:31:40.138 "adrfam": "ipv4", 00:31:40.138 "trsvcid": "4420", 00:31:40.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.138 "hdgst": false, 00:31:40.138 "ddgst": false 00:31:40.138 }, 00:31:40.138 "method": "bdev_nvme_attach_controller" 00:31:40.138 }' 00:31:40.138 [2024-11-27 06:33:59.504807] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:31:40.138 [2024-11-27 06:33:59.504860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120581 ] 00:31:40.138 [2024-11-27 06:33:59.620157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:40.138 [2024-11-27 06:33:59.650259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.399 [2024-11-27 06:33:59.674159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.399 Running I/O for 10 seconds... 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:40.971 06:34:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:31:40.971 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:40.972 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:40.972 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:40.972 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:40.972 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.972 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.972 [2024-11-27 06:34:00.429618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:40.972 [2024-11-27 06:34:00.429862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.429993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.429999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 
[2024-11-27 06:34:00.430013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 06:34:00.430144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.972 [2024-11-27 06:34:00.430153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:40.972 [2024-11-27 
06:34:00.430158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:40.972 [2024-11-27 06:34:00.430167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.972 [2024-11-27 06:34:00.430173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ / ABORTED - SQ DELETION notice pairs for cid:30-57 (lba:20224-23680, len:128 each) repeat and are elided ...]
00:31:40.973 [2024-11-27 06:34:00.430565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d00f30 is same with the state(6) to be set
00:31:40.973 [2024-11-27 06:34:00.431765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:40.973 task offset: 23808 on job bdev=Nvme0n1 fails
00:31:40.973
00:31:40.973 Latency(us)
00:31:40.973 [2024-11-27T06:34:00.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:40.973 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:40.973 Job: Nvme0n1 ended in about 0.61 seconds with error
00:31:40.973 Verification LBA range: start 0x0 length 0x400
00:31:40.973 Nvme0n1 : 0.61 1892.52 118.28 105.14 0.00 31283.89 1594.29 29642.44
00:31:40.973 [2024-11-27T06:34:00.503Z] ===================================================================================================================
00:31:40.973 [2024-11-27T06:34:00.503Z] Total : 1892.52 118.28 105.14 0.00 31283.89 1594.29 29642.44
00:31:40.973 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:40.973 [2024-11-27 06:34:00.433647] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:40.973 [2024-11-27 06:34:00.433669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c85d20 (9): Bad file descriptor
00:31:40.973 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:40.973 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:40.973 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:40.973 [2024-11-27 06:34:00.434364] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:31:40.973 [2024-11-27 06:34:00.434424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:31:40.973 [2024-11-27 06:34:00.434439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:40.973 [2024-11-27 06:34:00.434449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:31:40.973 [2024-11-27 06:34:00.434456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:31:40.973 [2024-11-27 06:34:00.434462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.973 [2024-11-27 06:34:00.434467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c85d20
00:31:40.973 [2024-11-27 06:34:00.434490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c85d20 (9): Bad file descriptor
00:31:40.973 [2024-11-27 06:34:00.434501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:31:40.973 [2024-11-27 06:34:00.434507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:31:40.973 [2024-11-27 06:34:00.434514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
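The error burst above is the scenario this host_management step exercises: the host NQN was earlier dropped from the subsystem's allow list, so the FABRIC CONNECT sent during the automatic reconnect completes with sct 1, sc 132 (0x84, the fabrics invalid-host status) and the reset fails; the traced rpc_cmd at @85 then re-admits the host. A minimal sketch of that allow-list round-trip against a running target, assuming the rpc.py path and the NQNs shown in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2016-06.io.spdk:cnode0
  hostnqn=nqn.2016-06.io.spdk:host0
  # Drop the host from the allow list: in-flight controllers start failing
  # their reconnect attempts with the COMMAND SPECIFIC (01/84) status seen above.
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  # Re-admit the host so the next reconnect attempt can complete.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn"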
00:31:40.974 [2024-11-27 06:34:00.434521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:40.974 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.974 06:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 120581 00:31:42.353 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (120581) - No such process 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:42.353 { 00:31:42.353 "params": { 00:31:42.353 "name": "Nvme$subsystem", 00:31:42.353 "trtype": "$TEST_TRANSPORT", 00:31:42.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.353 "adrfam": "ipv4", 00:31:42.353 "trsvcid": "$NVMF_PORT", 00:31:42.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.353 "hdgst": ${hdgst:-false}, 00:31:42.353 "ddgst": ${ddgst:-false} 00:31:42.353 }, 00:31:42.353 "method": "bdev_nvme_attach_controller" 00:31:42.353 } 00:31:42.353 EOF 00:31:42.353 )") 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:42.353 06:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:42.353 "params": { 00:31:42.353 "name": "Nvme0", 00:31:42.353 "trtype": "tcp", 00:31:42.353 "traddr": "10.0.0.3", 00:31:42.353 "adrfam": "ipv4", 00:31:42.353 "trsvcid": "4420", 00:31:42.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.353 "hdgst": false, 00:31:42.353 "ddgst": false 00:31:42.353 }, 00:31:42.353 "method": "bdev_nvme_attach_controller" 00:31:42.353 }' 00:31:42.353 [2024-11-27 06:34:01.489718] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
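In the @100 step above, bdevperf receives its bdev table via --json from gen_nvmf_target_json, whose rendered attach parameters are printed at the end of the trace. A standalone sketch of the same run, writing the config to a file instead of a process-substitution fd (the outer "subsystems"/"config" wrapper is assumed from SPDK's JSON-config layout; the params block is verbatim from the log):

  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same queue depth, IO size, workload, and runtime as the traced invocation.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1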
00:31:42.353 [2024-11-27 06:34:01.489790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120626 ]
00:31:42.354 [2024-11-27 06:34:01.604179] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:31:42.354 [2024-11-27 06:34:01.634255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:42.354 [2024-11-27 06:34:01.658001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:42.354 Running I/O for 1 seconds...
00:31:43.296 2048.00 IOPS, 128.00 MiB/s
00:31:43.296
00:31:43.296 Latency(us)
00:31:43.296 [2024-11-27T06:34:02.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:43.296 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.296 Verification LBA range: start 0x0 length 0x400
00:31:43.296 Nvme0n1 : 1.01 2092.16 130.76 0.00 0.00 30034.25 4108.60 29440.79
00:31:43.296 [2024-11-27T06:34:02.826Z] ===================================================================================================================
00:31:43.296 [2024-11-27T06:34:02.826Z] Total : 2092.16 130.76 0.00 0.00 30034.25 4108.60 29440.79
00:31:43.557 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:43.557 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:43.557 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:31:43.557 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:31:43.557 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:43.557 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:43.557 06:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:43.557 rmmod nvme_tcp
00:31:43.557 rmmod nvme_fabrics
00:31:43.557 rmmod nvme_keyring
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 120509 ']'
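In the table above, the MiB/s column follows directly from IOPS and the 64 KiB IO size: MiB/s = IOPS x 65536 / 2^20 = IOPS / 16, so 2092.16 IOPS is exactly the 130.76 MiB/s reported. A one-liner to check the conversion:

  # 65536-byte IOs: MiB/s = IOPS / 16
  awk 'BEGIN { printf "%.2f MiB/s\n", 2092.16 * 65536 / 1048576 }'   # -> 130.76 MiB/s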
00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 120509 00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 120509 ']' 00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 120509 00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.557 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120509 00:31:43.818 killing process with pid 120509 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120509' 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 120509 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 120509 00:31:43.818 [2024-11-27 06:34:03.175757] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:43.818 06:34:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:43.818 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:44.080 00:31:44.080 real 0m5.401s 00:31:44.080 user 0m16.668s 00:31:44.080 sys 0m1.863s 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:44.080 ************************************ 00:31:44.080 END TEST nvmf_host_management 00:31:44.080 ************************************ 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:44.080 ************************************ 00:31:44.080 START TEST nvmf_lvol 00:31:44.080 ************************************ 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:44.080 * 
Looking for test storage... 00:31:44.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:44.080 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:44.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.081 --rc genhtml_branch_coverage=1 00:31:44.081 --rc genhtml_function_coverage=1 00:31:44.081 --rc genhtml_legend=1 00:31:44.081 --rc geninfo_all_blocks=1 00:31:44.081 --rc geninfo_unexecuted_blocks=1 00:31:44.081 00:31:44.081 ' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:44.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.081 --rc genhtml_branch_coverage=1 00:31:44.081 --rc genhtml_function_coverage=1 00:31:44.081 --rc genhtml_legend=1 00:31:44.081 --rc geninfo_all_blocks=1 00:31:44.081 --rc geninfo_unexecuted_blocks=1 00:31:44.081 00:31:44.081 ' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:44.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.081 --rc genhtml_branch_coverage=1 00:31:44.081 --rc genhtml_function_coverage=1 00:31:44.081 --rc genhtml_legend=1 00:31:44.081 --rc geninfo_all_blocks=1 00:31:44.081 --rc geninfo_unexecuted_blocks=1 00:31:44.081 00:31:44.081 ' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:44.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.081 --rc genhtml_branch_coverage=1 00:31:44.081 --rc genhtml_function_coverage=1 00:31:44.081 --rc genhtml_legend=1 00:31:44.081 --rc geninfo_all_blocks=1 00:31:44.081 --rc geninfo_unexecuted_blocks=1 00:31:44.081 00:31:44.081 ' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:44.081 06:34:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:44.081 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:44.082 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:44.343 Cannot find device "nvmf_init_br" 00:31:44.343 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:31:44.343 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:44.343 Cannot find device "nvmf_init_br2" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:44.344 Cannot find device "nvmf_tgt_br" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:44.344 Cannot find device "nvmf_tgt_br2" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:44.344 Cannot find device "nvmf_init_br" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:44.344 Cannot find device "nvmf_init_br2" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:44.344 Cannot find 
device "nvmf_tgt_br" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:44.344 Cannot find device "nvmf_tgt_br2" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:44.344 Cannot find device "nvmf_br" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:44.344 Cannot find device "nvmf_init_if" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:44.344 Cannot find device "nvmf_init_if2" 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:44.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:44.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:44.344 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:44.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:44.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:31:44.606 00:31:44.606 --- 10.0.0.3 ping statistics --- 00:31:44.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.606 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:44.606 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:44.606 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:31:44.606 00:31:44.606 --- 10.0.0.4 ping statistics --- 00:31:44.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.606 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:44.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:44.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:31:44.606 00:31:44.606 --- 10.0.0.1 ping statistics --- 00:31:44.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.606 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:44.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:31:44.606 00:31:44.606 --- 10.0.0.2 ping statistics --- 00:31:44.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.606 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=120885 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 120885 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 120885 ']' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.606 06:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:44.606 [2024-11-27 06:34:03.956997] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.606 [2024-11-27 06:34:03.957593] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:31:44.606 [2024-11-27 06:34:03.957628] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.606 [2024-11-27 06:34:04.070879] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:44.606 [2024-11-27 06:34:04.098546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:44.606 [2024-11-27 06:34:04.122133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.606 [2024-11-27 06:34:04.122168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.606 [2024-11-27 06:34:04.122175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.607 [2024-11-27 06:34:04.122180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.607 [2024-11-27 06:34:04.122186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.607 [2024-11-27 06:34:04.122927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.607 [2024-11-27 06:34:04.123002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.607 [2024-11-27 06:34:04.123153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.867 [2024-11-27 06:34:04.187358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:44.867 [2024-11-27 06:34:04.188196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:44.867 [2024-11-27 06:34:04.188448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
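At this point the target is up in interrupt mode (reactors on cores 0-2 of mask 0x7, each poll-group thread switched to intr mode), and the trace records that follow provision the lvol stack over rpc.py. A condensed sketch of that RPC sequence, using the paths, sizes, and names from the trace (the lvstore/lvol UUIDs it prints will differ per run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512                                   # Malloc0
  "$rpc" bdev_malloc_create 64 512                                   # Malloc1
  "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
  lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)                   # prints the new lvstore UUID
  lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol; prints its UUID
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420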
00:31:44.867 [2024-11-27 06:34:04.188707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:45.468 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:45.468 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:45.468 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.468 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:45.468 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:45.468 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.468 06:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:45.754 [2024-11-27 06:34:05.044117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.754 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:46.014 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:46.014 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:46.014 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:46.014 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:46.276 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:46.537 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=55a0ad6e-99b5-4535-ac74-a4cc710d6c85 00:31:46.537 06:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55a0ad6e-99b5-4535-ac74-a4cc710d6c85 lvol 20 00:31:46.798 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=40fb2871-2fad-47dc-8bd8-2ee1b6c65d39 00:31:46.798 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:46.798 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40fb2871-2fad-47dc-8bd8-2ee1b6c65d39 00:31:47.059 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:47.321 [2024-11-27 06:34:06.696003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 
*** 00:31:47.321 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:31:47.581 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=121027 00:31:47.581 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:47.581 06:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:48.523 06:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 40fb2871-2fad-47dc-8bd8-2ee1b6c65d39 MY_SNAPSHOT 00:31:48.783 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c8fbbdb7-f7a9-4d3a-a1a8-608d90cd6c7e 00:31:48.783 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 40fb2871-2fad-47dc-8bd8-2ee1b6c65d39 30 00:31:49.043 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone c8fbbdb7-f7a9-4d3a-a1a8-608d90cd6c7e MY_CLONE 00:31:49.304 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f4a43518-f596-4c3b-865b-6f6300d16ed2 00:31:49.304 06:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f4a43518-f596-4c3b-865b-6f6300d16ed2 00:31:49.565 06:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 121027 00:31:57.692 Initializing NVMe Controllers 00:31:57.692 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:31:57.692 Controller IO queue size 128, less than required. 00:31:57.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:57.692 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:57.692 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:57.692 Initialization complete. Launching workers. 
00:31:57.692 ========================================================
00:31:57.692 Latency(us)
00:31:57.692 Device Information : IOPS MiB/s Average min max
00:31:57.692 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15437.50 60.30 8294.95 3047.93 58806.57
00:31:57.692 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15239.50 59.53 8399.33 3828.27 62616.41
00:31:57.692 ========================================================
00:31:57.692 Total : 30677.00 119.83 8346.80 3047.93 62616.41
00:31:57.692
00:31:57.692 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.952 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 40fb2871-2fad-47dc-8bd8-2ee1b6c65d39 00:31:58.211 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55a0ad6e-99b5-4535-ac74-a4cc710d6c85 00:31:58.471 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:58.471 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:58.472 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:58.472 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.472 06:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:58.732 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.732 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:58.732 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.732 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.733 rmmod nvme_tcp 00:31:58.733 rmmod nvme_fabrics 00:31:58.733 rmmod nvme_keyring 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 120885 ']' 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 120885 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 120885 ']' 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 120885 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120885 00:31:58.733 killing
process with pid 120885 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120885' 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 120885 00:31:58.733 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 120885 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:58.992 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:58.993 
06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:31:58.993 00:31:58.993 real 0m15.063s 00:31:58.993 user 0m53.277s 00:31:58.993 sys 0m5.202s 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.993 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:58.993 ************************************ 00:31:58.993 END TEST nvmf_lvol 00:31:58.993 ************************************ 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.254 ************************************ 00:31:59.254 START TEST nvmf_lvs_grow 00:31:59.254 ************************************ 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:59.254 * Looking for test storage... 
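Condensed for reference, the nvmf_lvol run that finished above drives the following RPC sequence. This is a sketch rather than the verbatim target/nvmf_lvol.sh: rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and $lvs, $lvol, $snap, $clone stand for the run-specific UUIDs printed in the trace (55a0ad6e-..., 40fb2871-..., and so on).

    # Build a raid0 of two malloc bdevs, carve an lvstore and a 20 MiB lvol
    # out of it, and export the lvol over NVMe/TCP.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                 # Malloc0: 64 MiB, 512 B blocks
    rpc.py bdev_malloc_create 64 512                 # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs        # prints the lvstore UUID ($lvs)
    rpc.py bdev_lvol_create -u "$lvs" lvol 20        # prints the lvol UUID ($lvol)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # While spdk_nvme_perf writes over the fabric, exercise the lvol surface:
    rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT    # prints the snapshot UUID ($snap)
    rpc.py bdev_lvol_resize "$lvol" 30               # grow the live lvol to 30 MiB
    rpc.py bdev_lvol_clone "$snap" MY_CLONE          # prints the clone UUID ($clone)
    rpc.py bdev_lvol_inflate "$clone"                # detach the clone from its snapshot
    # Teardown mirrors setup: delete the subsystem, the lvol, then the lvstore.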
00:31:59.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.254 --rc genhtml_branch_coverage=1 00:31:59.254 --rc genhtml_function_coverage=1 00:31:59.254 --rc genhtml_legend=1 00:31:59.254 --rc geninfo_all_blocks=1 00:31:59.254 --rc geninfo_unexecuted_blocks=1 00:31:59.254 00:31:59.254 ' 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.254 --rc genhtml_branch_coverage=1 00:31:59.254 --rc genhtml_function_coverage=1 00:31:59.254 --rc genhtml_legend=1 00:31:59.254 --rc geninfo_all_blocks=1 00:31:59.254 --rc geninfo_unexecuted_blocks=1 00:31:59.254 00:31:59.254 ' 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.254 --rc genhtml_branch_coverage=1 00:31:59.254 --rc genhtml_function_coverage=1 00:31:59.254 --rc genhtml_legend=1 00:31:59.254 --rc geninfo_all_blocks=1 00:31:59.254 --rc geninfo_unexecuted_blocks=1 00:31:59.254 00:31:59.254 ' 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.254 --rc genhtml_branch_coverage=1 00:31:59.254 --rc genhtml_function_coverage=1 00:31:59.254 --rc genhtml_legend=1 00:31:59.254 --rc geninfo_all_blocks=1 00:31:59.254 --rc geninfo_unexecuted_blocks=1 00:31:59.254 00:31:59.254 ' 00:31:59.254 06:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.254 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
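The cmp_versions trace a little further up (lt 1.15 2, fed by lcov --version | awk '{print $NF}') is how the harness picks version-appropriate coverage flags: split both version strings on '.' and '-', then compare component by component as integers. A simplified sketch of that logic follows; the real scripts/common.sh helper also dispatches other operators through its case statement, and the function name here is changed to make clear this is not the verbatim code.

    # Simplified sketch of the version comparison traced above: split on '.'
    # and '-', compare numerically, and treat missing components as 0.
    lt() { cmp_versions_sketch "$1" '<' "$2"; }
    cmp_versions_sketch() {
        local IFS=.-                       # split versions on dots and dashes
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v d1 d2
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == '>' ]]; return; }   # decided: greater
            ((d1 < d2)) && { [[ $op == '<' ]]; return; }   # decided: less
        done
        [[ $op == *'='* ]]                 # all components equal
    }
    # "lt 1.15 2" succeeds at the first component (1 < 2), which is why the
    # lcov 1.x-style "--rc lcov_*" LCOV_OPTS branch is selected above.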
00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.255 06:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:59.255 Cannot find device "nvmf_init_br" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:59.255 Cannot find device "nvmf_init_br2" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:59.255 Cannot find device "nvmf_tgt_br" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:59.255 Cannot find device "nvmf_tgt_br2" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:59.255 Cannot find device "nvmf_init_br" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:59.255 Cannot find device "nvmf_init_br2" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:59.255 Cannot find device "nvmf_tgt_br" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:59.255 Cannot find device "nvmf_tgt_br2" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:59.255 Cannot find device "nvmf_br" 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:31:59.255 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:59.517 Cannot find device "nvmf_init_if" 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:59.517 Cannot find device "nvmf_init_if2" 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:59.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:59.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:31:59.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:59.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:31:59.517 00:31:59.517 --- 10.0.0.3 ping statistics --- 00:31:59.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.517 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:59.517 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:59.517 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:31:59.517 00:31:59.517 --- 10.0.0.4 ping statistics --- 00:31:59.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.517 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:59.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:31:59.517 00:31:59.517 --- 10.0.0.1 ping statistics --- 00:31:59.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.517 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:59.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:31:59.517 00:31:59.517 --- 10.0.0.2 ping statistics --- 00:31:59.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.517 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:59.517 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=121441 00:31:59.518 06:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 121441 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 121441 ']' 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.518 06:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.518 [2024-11-27 06:34:19.017705] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.518 [2024-11-27 06:34:19.018456] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:31:59.518 [2024-11-27 06:34:19.018494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.778 [2024-11-27 06:34:19.133501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:59.778 [2024-11-27 06:34:19.162685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.778 [2024-11-27 06:34:19.184721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.778 [2024-11-27 06:34:19.184760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.778 [2024-11-27 06:34:19.184767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.778 [2024-11-27 06:34:19.184772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.778 [2024-11-27 06:34:19.184777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.778 [2024-11-27 06:34:19.185029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.778 [2024-11-27 06:34:19.249707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.778 [2024-11-27 06:34:19.250154] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
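The nvmftestinit block above builds the veth topology that the rest of the run depends on; the initial "Cannot find device" noise is just the idempotent teardown running before anything exists. Condensed, the setup amounts to the following sketch of the ip/iptables calls in the trace (one of the three ACCEPT rules shown):

    # The target lives in a network namespace; each veth pair has its *_if end
    # carrying an address and its *_br end enslaved to the nvmf_br bridge,
    # which is how the host-side initiator reaches 10.0.0.3:4420.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator pair 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator pair 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target pair 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends in
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br                         # bridge the four pairs
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP

The four pings above verify 10.0.0.1 through 10.0.0.4 end to end before nvmf_tgt is started inside the namespace with --interrupt-mode pinned to core 0 (-m 0x1).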
00:32:00.719 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.719 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:00.719 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.719 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.719 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.719 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.719 06:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.719 [2024-11-27 06:34:20.105830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.719 ************************************ 00:32:00.719 START TEST lvs_grow_clean 00:32:00.719 ************************************ 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:00.719 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.980 06:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:00.980 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:01.241 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:01.241 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:01.241 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:01.502 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:01.502 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:01.502 06:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 lvol 150 00:32:01.762 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=435d7bbd-af3d-4569-93cd-b7cdbdb0a717 00:32:01.762 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:01.762 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:01.762 [2024-11-27 06:34:21.281529] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:01.762 [2024-11-27 06:34:21.281676] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:01.762 true 00:32:02.023 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:02.023 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:02.023 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:02.023 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:02.283 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 435d7bbd-af3d-4569-93cd-b7cdbdb0a717 00:32:02.543 06:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:02.804 [2024-11-27 06:34:22.153793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:02.804 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=121617 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 121617 /var/tmp/bdevperf.sock 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 121617 ']' 00:32:03.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:03.064 06:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:03.064 [2024-11-27 06:34:22.406215] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:03.064 [2024-11-27 06:34:22.406266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121617 ] 00:32:03.064 [2024-11-27 06:34:22.517602] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
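The lvs_grow_clean setup above comes down to growing a file-backed AIO bdev underneath a live lvstore, and the cluster arithmetic is the point of the test: a 200 MiB file at the 4 MiB cluster size (--cluster-sz 4194304) leaves 49 usable data clusters after lvstore metadata, and once the file is doubled and rescanned, the grow step traced further below should report 99. A condensed sketch (rpc.py again abbreviates the full scripts/rpc.py path, and $lvs stands for the lvstore UUID printed in the trace):

    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO"                                   # 200 MiB backing file
    rpc.py bdev_aio_create "$AIO" aio_bdev 4096               # 4096 B blocks -> 51200 blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs         # prints $lvs; 49 data clusters
    rpc.py bdev_lvol_create -u "$lvs" lvol 150                # 150 MiB lvol on the lvstore
    truncate -s 400M "$AIO"                                   # grow the file under the bdev
    rpc.py bdev_aio_rescan aio_bdev                           # block count 51200 -> 102400
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                   # lvstore absorbs the new space

The lvol is exported through nqn.2016-06.io.spdk:cnode0 as before, and bdevperf attaches over 10.0.0.3:4420 to keep I/O running while the grow happens.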
00:32:03.064 [2024-11-27 06:34:22.546412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.064 [2024-11-27 06:34:22.564462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.008 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:04.008 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:04.008 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:04.008 Nvme0n1 00:32:04.270 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:04.270 [ 00:32:04.270 { 00:32:04.270 "aliases": [ 00:32:04.270 "435d7bbd-af3d-4569-93cd-b7cdbdb0a717" 00:32:04.270 ], 00:32:04.270 "assigned_rate_limits": { 00:32:04.270 "r_mbytes_per_sec": 0, 00:32:04.270 "rw_ios_per_sec": 0, 00:32:04.270 "rw_mbytes_per_sec": 0, 00:32:04.270 "w_mbytes_per_sec": 0 00:32:04.270 }, 00:32:04.270 "block_size": 4096, 00:32:04.270 "claimed": false, 00:32:04.270 "driver_specific": { 00:32:04.270 "mp_policy": "active_passive", 00:32:04.270 "nvme": [ 00:32:04.270 { 00:32:04.270 "ctrlr_data": { 00:32:04.270 "ana_reporting": false, 00:32:04.270 "cntlid": 1, 00:32:04.270 "firmware_revision": "25.01", 00:32:04.270 "model_number": "SPDK bdev Controller", 00:32:04.270 "multi_ctrlr": true, 00:32:04.270 "oacs": { 00:32:04.270 "firmware": 0, 00:32:04.270 "format": 0, 00:32:04.270 "ns_manage": 0, 00:32:04.270 "security": 0 00:32:04.270 }, 00:32:04.270 "serial_number": "SPDK0", 00:32:04.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.270 "vendor_id": "0x8086" 00:32:04.270 }, 00:32:04.270 "ns_data": { 00:32:04.270 "can_share": true, 00:32:04.270 "id": 1 00:32:04.270 }, 00:32:04.270 "trid": { 00:32:04.270 "adrfam": "IPv4", 00:32:04.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.270 "traddr": "10.0.0.3", 00:32:04.270 "trsvcid": "4420", 00:32:04.270 "trtype": "TCP" 00:32:04.270 }, 00:32:04.270 "vs": { 00:32:04.270 "nvme_version": "1.3" 00:32:04.270 } 00:32:04.270 } 00:32:04.270 ] 00:32:04.270 }, 00:32:04.270 "memory_domains": [ 00:32:04.270 { 00:32:04.270 "dma_device_id": "system", 00:32:04.270 "dma_device_type": 1 00:32:04.270 } 00:32:04.270 ], 00:32:04.270 "name": "Nvme0n1", 00:32:04.270 "num_blocks": 38912, 00:32:04.270 "numa_id": -1, 00:32:04.270 "product_name": "NVMe disk", 00:32:04.270 "supported_io_types": { 00:32:04.270 "abort": true, 00:32:04.270 "compare": true, 00:32:04.270 "compare_and_write": true, 00:32:04.270 "copy": true, 00:32:04.270 "flush": true, 00:32:04.270 "get_zone_info": false, 00:32:04.270 "nvme_admin": true, 00:32:04.270 "nvme_io": true, 00:32:04.270 "nvme_io_md": false, 00:32:04.270 "nvme_iov_md": false, 00:32:04.270 "read": true, 00:32:04.270 "reset": true, 00:32:04.270 "seek_data": false, 00:32:04.270 "seek_hole": false, 00:32:04.270 "unmap": true, 00:32:04.270 "write": true, 00:32:04.270 "write_zeroes": true, 00:32:04.270 "zcopy": false, 00:32:04.270 "zone_append": false, 00:32:04.270 "zone_management": false 00:32:04.270 }, 00:32:04.270 "uuid": "435d7bbd-af3d-4569-93cd-b7cdbdb0a717", 00:32:04.270 "zoned": 
false 00:32:04.270 } 00:32:04.270 ] 00:32:04.270 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=121663 00:32:04.270 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:04.270 06:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:04.532 Running I/O for 10 seconds... 00:32:05.475 Latency(us) 00:32:05.475 [2024-11-27T06:34:25.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.475 Nvme0n1 : 1.00 11449.00 44.72 0.00 0.00 0.00 0.00 0.00 00:32:05.475 [2024-11-27T06:34:25.005Z] =================================================================================================================== 00:32:05.475 [2024-11-27T06:34:25.005Z] Total : 11449.00 44.72 0.00 0.00 0.00 0.00 0.00 00:32:05.475 00:32:06.419 06:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:06.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.419 Nvme0n1 : 2.00 11161.50 43.60 0.00 0.00 0.00 0.00 0.00 00:32:06.419 [2024-11-27T06:34:25.949Z] =================================================================================================================== 00:32:06.419 [2024-11-27T06:34:25.949Z] Total : 11161.50 43.60 0.00 0.00 0.00 0.00 0.00 00:32:06.419 00:32:06.680 true 00:32:06.680 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:06.680 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:06.941 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:06.941 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:06.941 06:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 121663 00:32:07.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.512 Nvme0n1 : 3.00 11903.67 46.50 0.00 0.00 0.00 0.00 0.00 00:32:07.512 [2024-11-27T06:34:27.042Z] =================================================================================================================== 00:32:07.512 [2024-11-27T06:34:27.042Z] Total : 11903.67 46.50 0.00 0.00 0.00 0.00 0.00 00:32:07.512 00:32:08.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.450 Nvme0n1 : 4.00 12141.75 47.43 0.00 0.00 0.00 0.00 0.00 00:32:08.450 [2024-11-27T06:34:27.980Z] =================================================================================================================== 00:32:08.450 [2024-11-27T06:34:27.980Z] Total : 12141.75 47.43 0.00 0.00 0.00 0.00 0.00 00:32:08.450 00:32:09.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:32:09.384 Nvme0n1 : 5.00 12363.80 48.30 0.00 0.00 0.00 0.00 0.00 00:32:09.384 [2024-11-27T06:34:28.914Z] =================================================================================================================== 00:32:09.384 [2024-11-27T06:34:28.914Z] Total : 12363.80 48.30 0.00 0.00 0.00 0.00 0.00 00:32:09.384 00:32:10.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.315 Nvme0n1 : 6.00 12527.17 48.93 0.00 0.00 0.00 0.00 0.00 00:32:10.315 [2024-11-27T06:34:29.845Z] =================================================================================================================== 00:32:10.315 [2024-11-27T06:34:29.845Z] Total : 12527.17 48.93 0.00 0.00 0.00 0.00 0.00 00:32:10.315 00:32:11.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.761 Nvme0n1 : 7.00 12601.57 49.22 0.00 0.00 0.00 0.00 0.00 00:32:11.761 [2024-11-27T06:34:31.292Z] =================================================================================================================== 00:32:11.762 [2024-11-27T06:34:31.292Z] Total : 12601.57 49.22 0.00 0.00 0.00 0.00 0.00 00:32:11.762 00:32:12.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.354 Nvme0n1 : 8.00 12664.50 49.47 0.00 0.00 0.00 0.00 0.00 00:32:12.354 [2024-11-27T06:34:31.884Z] =================================================================================================================== 00:32:12.354 [2024-11-27T06:34:31.884Z] Total : 12664.50 49.47 0.00 0.00 0.00 0.00 0.00 00:32:12.354 00:32:13.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.729 Nvme0n1 : 9.00 12706.56 49.63 0.00 0.00 0.00 0.00 0.00 00:32:13.729 [2024-11-27T06:34:33.259Z] =================================================================================================================== 00:32:13.729 [2024-11-27T06:34:33.259Z] Total : 12706.56 49.63 0.00 0.00 0.00 0.00 0.00 00:32:13.729 00:32:14.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.663 Nvme0n1 : 10.00 12720.80 49.69 0.00 0.00 0.00 0.00 0.00 00:32:14.663 [2024-11-27T06:34:34.193Z] =================================================================================================================== 00:32:14.663 [2024-11-27T06:34:34.193Z] Total : 12720.80 49.69 0.00 0.00 0.00 0.00 0.00 00:32:14.663 00:32:14.663 00:32:14.663 Latency(us) 00:32:14.663 [2024-11-27T06:34:34.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.663 Nvme0n1 : 10.00 12730.62 49.73 0.00 0.00 10050.74 4637.93 24601.21 00:32:14.663 [2024-11-27T06:34:34.193Z] =================================================================================================================== 00:32:14.663 [2024-11-27T06:34:34.193Z] Total : 12730.62 49.73 0.00 0.00 10050.74 4637.93 24601.21 00:32:14.663 { 00:32:14.663 "results": [ 00:32:14.663 { 00:32:14.663 "job": "Nvme0n1", 00:32:14.663 "core_mask": "0x2", 00:32:14.663 "workload": "randwrite", 00:32:14.663 "status": "finished", 00:32:14.663 "queue_depth": 128, 00:32:14.663 "io_size": 4096, 00:32:14.663 "runtime": 10.002342, 00:32:14.663 "iops": 12730.618489149841, 00:32:14.663 "mibps": 49.72897847324157, 00:32:14.663 "io_failed": 0, 00:32:14.663 "io_timeout": 0, 00:32:14.663 "avg_latency_us": 10050.74461306489, 00:32:14.663 "min_latency_us": 4637.932307692307, 00:32:14.663 "max_latency_us": 
24601.206153846153 00:32:14.663 } 00:32:14.663 ], 00:32:14.663 "core_count": 1 00:32:14.663 } 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 121617 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 121617 ']' 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 121617 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121617 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:14.663 killing process with pid 121617 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121617' 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 121617 00:32:14.663 Received shutdown signal, test time was about 10.000000 seconds 00:32:14.663 00:32:14.663 Latency(us) 00:32:14.663 [2024-11-27T06:34:34.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.663 [2024-11-27T06:34:34.193Z] =================================================================================================================== 00:32:14.663 [2024-11-27T06:34:34.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 121617 00:32:14.663 06:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:14.663 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.923 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:14.923 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:15.184 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:15.184 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:15.184 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:15.444 [2024-11-27 06:34:34.797578] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:15.444 06:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:15.704 2024/11/27 06:34:35 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:32:15.704 request: 00:32:15.704 { 00:32:15.704 "method": "bdev_lvol_get_lvstores", 00:32:15.704 "params": { 00:32:15.704 "uuid": "ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58" 00:32:15.704 } 00:32:15.704 } 00:32:15.704 Got JSON-RPC error response 00:32:15.704 GoRPCClient: error on JSON-RPC call 00:32:15.704 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:15.704 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:15.704 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:15.704 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
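Step 85 above is a deliberate failure check: deleting aio_bdev closed the lvstore, so bdev_lvol_get_lvstores must now fail with -19 (No such device), and the NOT helper from autotest_common.sh inverts the exit status so the step passes only when the call fails. A simplified stand-in for the idiom (the real helper also validates the argument via the type -t/type -P probes visible above and inspects the exit code):

  NOT() { ! "$@"; }   # simplified; the autotest version also treats es > 128 (signals) specially
  scripts/rpc.py bdev_aio_delete aio_bdev
  NOT scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58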
00:32:15.704 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:15.965 aio_bdev 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 435d7bbd-af3d-4569-93cd-b7cdbdb0a717 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=435d7bbd-af3d-4569-93cd-b7cdbdb0a717 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:15.965 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 435d7bbd-af3d-4569-93cd-b7cdbdb0a717 -t 2000 00:32:16.224 [ 00:32:16.224 { 00:32:16.224 "aliases": [ 00:32:16.224 "lvs/lvol" 00:32:16.224 ], 00:32:16.224 "assigned_rate_limits": { 00:32:16.224 "r_mbytes_per_sec": 0, 00:32:16.224 "rw_ios_per_sec": 0, 00:32:16.224 "rw_mbytes_per_sec": 0, 00:32:16.224 "w_mbytes_per_sec": 0 00:32:16.224 }, 00:32:16.224 "block_size": 4096, 00:32:16.224 "claimed": false, 00:32:16.224 "driver_specific": { 00:32:16.224 "lvol": { 00:32:16.224 "base_bdev": "aio_bdev", 00:32:16.224 "clone": false, 00:32:16.224 "esnap_clone": false, 00:32:16.224 "lvol_store_uuid": "ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58", 00:32:16.224 "num_allocated_clusters": 38, 00:32:16.224 "snapshot": false, 00:32:16.224 "thin_provision": false 00:32:16.224 } 00:32:16.224 }, 00:32:16.224 "name": "435d7bbd-af3d-4569-93cd-b7cdbdb0a717", 00:32:16.224 "num_blocks": 38912, 00:32:16.224 "product_name": "Logical Volume", 00:32:16.224 "supported_io_types": { 00:32:16.224 "abort": false, 00:32:16.224 "compare": false, 00:32:16.224 "compare_and_write": false, 00:32:16.224 "copy": false, 00:32:16.224 "flush": false, 00:32:16.224 "get_zone_info": false, 00:32:16.224 "nvme_admin": false, 00:32:16.224 "nvme_io": false, 00:32:16.224 "nvme_io_md": false, 00:32:16.224 "nvme_iov_md": false, 00:32:16.224 "read": true, 00:32:16.224 "reset": true, 00:32:16.224 "seek_data": true, 00:32:16.224 "seek_hole": true, 00:32:16.224 "unmap": true, 00:32:16.224 "write": true, 00:32:16.224 "write_zeroes": true, 00:32:16.224 "zcopy": false, 00:32:16.224 "zone_append": false, 00:32:16.224 "zone_management": false 00:32:16.224 }, 00:32:16.224 "uuid": "435d7bbd-af3d-4569-93cd-b7cdbdb0a717", 00:32:16.224 "zoned": false 00:32:16.224 } 00:32:16.224 ] 00:32:16.225 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:16.225 06:34:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:16.225 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:16.484 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:16.484 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:16.484 06:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:16.745 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:16.745 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 435d7bbd-af3d-4569-93cd-b7cdbdb0a717 00:32:17.006 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae6b7c5a-6cf5-4d7e-b7f5-db9588cf7c58 00:32:17.267 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:17.267 06:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:17.837 00:32:17.837 real 0m17.023s 00:32:17.837 user 0m16.347s 00:32:17.837 sys 0m1.754s 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:17.837 ************************************ 00:32:17.837 END TEST lvs_grow_clean 00:32:17.837 ************************************ 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:17.837 ************************************ 00:32:17.837 START TEST lvs_grow_dirty 00:32:17.837 ************************************ 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:17.837 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:18.098 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:18.098 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:18.098 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:18.358 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:18.358 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:18.358 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:18.358 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:18.358 06:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 lvol 150 00:32:18.619 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b6713084-5c53-44b4-bd38-8e741db90d64 00:32:18.619 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:18.619 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:18.880 [2024-11-27 06:34:38.273513] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block 
count 102400 00:32:18.880 [2024-11-27 06:34:38.273636] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:18.880 true 00:32:18.880 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:18.880 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:19.139 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:19.139 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:19.400 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6713084-5c53-44b4-bd38-8e741db90d64 00:32:19.400 06:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:19.661 [2024-11-27 06:34:39.129874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:19.661 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=122039 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 122039 /var/tmp/bdevperf.sock 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 122039 ']' 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:19.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
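Steps 23-38 above prepared the dirty variant: a 200 MiB backing file with a 4 MiB cluster size yields 49 usable data clusters (metadata takes the rest), the 150 MiB lvol allocates 38 of them, and the file is then doubled and rescanned so the lvstore can be grown mid-I/O later. Condensed, with paths relative to the repo root and the UUID as reported in the log:

  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  scripts/rpc.py bdev_lvol_create -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 lvol 150   # 150 MiB -> 38 clusters
  truncate -s 400M test/nvmf/target/aio_bdev   # 51200 -> 102400 blocks of 4096 B
  scripts/rpc.py bdev_aio_rescan aio_bdev      # the lvstore's base bdev now reports the new size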
00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:19.922 06:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:19.922 [2024-11-27 06:34:39.401730] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:19.922 [2024-11-27 06:34:39.401797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122039 ] 00:32:20.183 [2024-11-27 06:34:39.516330] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:20.183 [2024-11-27 06:34:39.541877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.183 [2024-11-27 06:34:39.565760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.125 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.125 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:21.125 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:21.125 Nvme0n1 00:32:21.125 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:21.385 [ 00:32:21.386 { 00:32:21.386 "aliases": [ 00:32:21.386 "b6713084-5c53-44b4-bd38-8e741db90d64" 00:32:21.386 ], 00:32:21.386 "assigned_rate_limits": { 00:32:21.386 "r_mbytes_per_sec": 0, 00:32:21.386 "rw_ios_per_sec": 0, 00:32:21.386 "rw_mbytes_per_sec": 0, 00:32:21.386 "w_mbytes_per_sec": 0 00:32:21.386 }, 00:32:21.386 "block_size": 4096, 00:32:21.386 "claimed": false, 00:32:21.386 "driver_specific": { 00:32:21.386 "mp_policy": "active_passive", 00:32:21.386 "nvme": [ 00:32:21.386 { 00:32:21.386 "ctrlr_data": { 00:32:21.386 "ana_reporting": false, 00:32:21.386 "cntlid": 1, 00:32:21.386 "firmware_revision": "25.01", 00:32:21.386 "model_number": "SPDK bdev Controller", 00:32:21.386 "multi_ctrlr": true, 00:32:21.386 "oacs": { 00:32:21.386 "firmware": 0, 00:32:21.386 "format": 0, 00:32:21.386 "ns_manage": 0, 00:32:21.386 "security": 0 00:32:21.386 }, 00:32:21.386 "serial_number": "SPDK0", 00:32:21.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.386 "vendor_id": "0x8086" 00:32:21.386 }, 00:32:21.386 "ns_data": { 00:32:21.386 "can_share": true, 00:32:21.386 "id": 1 00:32:21.386 }, 00:32:21.386 "trid": { 00:32:21.386 "adrfam": "IPv4", 00:32:21.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.386 "traddr": "10.0.0.3", 00:32:21.386 "trsvcid": "4420", 00:32:21.386 "trtype": "TCP" 00:32:21.386 }, 00:32:21.386 "vs": { 00:32:21.386 "nvme_version": "1.3" 00:32:21.386 } 00:32:21.386 } 00:32:21.386 ] 00:32:21.386 }, 00:32:21.386 "memory_domains": [ 00:32:21.386 { 00:32:21.386 "dma_device_id": "system", 00:32:21.386 "dma_device_type": 1 00:32:21.386 } 00:32:21.386 ], 
00:32:21.386 "name": "Nvme0n1", 00:32:21.386 "num_blocks": 38912, 00:32:21.386 "numa_id": -1, 00:32:21.386 "product_name": "NVMe disk", 00:32:21.386 "supported_io_types": { 00:32:21.386 "abort": true, 00:32:21.386 "compare": true, 00:32:21.386 "compare_and_write": true, 00:32:21.386 "copy": true, 00:32:21.386 "flush": true, 00:32:21.386 "get_zone_info": false, 00:32:21.386 "nvme_admin": true, 00:32:21.386 "nvme_io": true, 00:32:21.386 "nvme_io_md": false, 00:32:21.386 "nvme_iov_md": false, 00:32:21.386 "read": true, 00:32:21.386 "reset": true, 00:32:21.386 "seek_data": false, 00:32:21.386 "seek_hole": false, 00:32:21.386 "unmap": true, 00:32:21.386 "write": true, 00:32:21.386 "write_zeroes": true, 00:32:21.386 "zcopy": false, 00:32:21.386 "zone_append": false, 00:32:21.386 "zone_management": false 00:32:21.386 }, 00:32:21.386 "uuid": "b6713084-5c53-44b4-bd38-8e741db90d64", 00:32:21.386 "zoned": false 00:32:21.386 } 00:32:21.386 ] 00:32:21.386 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:21.386 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=122087 00:32:21.386 06:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:21.386 Running I/O for 10 seconds... 00:32:22.328 Latency(us) 00:32:22.328 [2024-11-27T06:34:41.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.328 Nvme0n1 : 1.00 10555.00 41.23 0.00 0.00 0.00 0.00 0.00 00:32:22.328 [2024-11-27T06:34:41.858Z] =================================================================================================================== 00:32:22.328 [2024-11-27T06:34:41.858Z] Total : 10555.00 41.23 0.00 0.00 0.00 0.00 0.00 00:32:22.328 00:32:23.269 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:23.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.528 Nvme0n1 : 2.00 11646.50 45.49 0.00 0.00 0.00 0.00 0.00 00:32:23.528 [2024-11-27T06:34:43.058Z] =================================================================================================================== 00:32:23.528 [2024-11-27T06:34:43.058Z] Total : 11646.50 45.49 0.00 0.00 0.00 0.00 0.00 00:32:23.528 00:32:23.528 true 00:32:23.528 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:23.528 06:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:23.788 06:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:23.788 06:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:23.788 06:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 122087 
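Step 60 above grows the lvstore while bdevperf is mid-run: total_data_clusters moves from 49 to 99 with randwrite traffic still flowing, which is the property this test exists to exercise. The equivalent standalone check, reusing the jq filter the script itself applies:

  scripts/rpc.py bdev_lvol_grow_lvstore -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712
  total=$(scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 \
          | jq -r '.[0].total_data_clusters')
  (( total == 99 ))   # grew from 49 without pausing I/O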
00:32:24.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.356 Nvme0n1 : 3.00 12321.67 48.13 0.00 0.00 0.00 0.00 0.00 00:32:24.356 [2024-11-27T06:34:43.886Z] =================================================================================================================== 00:32:24.356 [2024-11-27T06:34:43.886Z] Total : 12321.67 48.13 0.00 0.00 0.00 0.00 0.00 00:32:24.356 00:32:25.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.736 Nvme0n1 : 4.00 12634.75 49.35 0.00 0.00 0.00 0.00 0.00 00:32:25.736 [2024-11-27T06:34:45.266Z] =================================================================================================================== 00:32:25.736 [2024-11-27T06:34:45.266Z] Total : 12634.75 49.35 0.00 0.00 0.00 0.00 0.00 00:32:25.736 00:32:26.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.305 Nvme0n1 : 5.00 12761.20 49.85 0.00 0.00 0.00 0.00 0.00 00:32:26.305 [2024-11-27T06:34:45.835Z] =================================================================================================================== 00:32:26.305 [2024-11-27T06:34:45.835Z] Total : 12761.20 49.85 0.00 0.00 0.00 0.00 0.00 00:32:26.305 00:32:27.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.701 Nvme0n1 : 6.00 12822.50 50.09 0.00 0.00 0.00 0.00 0.00 00:32:27.701 [2024-11-27T06:34:47.231Z] =================================================================================================================== 00:32:27.701 [2024-11-27T06:34:47.231Z] Total : 12822.50 50.09 0.00 0.00 0.00 0.00 0.00 00:32:27.701 00:32:28.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.652 Nvme0n1 : 7.00 12862.57 50.24 0.00 0.00 0.00 0.00 0.00 00:32:28.652 [2024-11-27T06:34:48.182Z] =================================================================================================================== 00:32:28.652 [2024-11-27T06:34:48.182Z] Total : 12862.57 50.24 0.00 0.00 0.00 0.00 0.00 00:32:28.652 00:32:29.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.593 Nvme0n1 : 8.00 12920.75 50.47 0.00 0.00 0.00 0.00 0.00 00:32:29.593 [2024-11-27T06:34:49.124Z] =================================================================================================================== 00:32:29.594 [2024-11-27T06:34:49.124Z] Total : 12920.75 50.47 0.00 0.00 0.00 0.00 0.00 00:32:29.594 00:32:30.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:30.527 Nvme0n1 : 9.00 12931.11 50.51 0.00 0.00 0.00 0.00 0.00 00:32:30.527 [2024-11-27T06:34:50.057Z] =================================================================================================================== 00:32:30.527 [2024-11-27T06:34:50.057Z] Total : 12931.11 50.51 0.00 0.00 0.00 0.00 0.00 00:32:30.527 00:32:31.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:31.461 Nvme0n1 : 10.00 12094.30 47.24 0.00 0.00 0.00 0.00 0.00 00:32:31.461 [2024-11-27T06:34:50.991Z] =================================================================================================================== 00:32:31.461 [2024-11-27T06:34:50.991Z] Total : 12094.30 47.24 0.00 0.00 0.00 0.00 0.00 00:32:31.461 00:32:31.461 00:32:31.461 Latency(us) 00:32:31.461 [2024-11-27T06:34:50.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:32:31.462 Nvme0n1 : 10.01 12096.03 47.25 0.00 0.00 10578.86 4209.43 583976.17 00:32:31.462 [2024-11-27T06:34:50.992Z] =================================================================================================================== 00:32:31.462 [2024-11-27T06:34:50.992Z] Total : 12096.03 47.25 0.00 0.00 10578.86 4209.43 583976.17 00:32:31.462 { 00:32:31.462 "results": [ 00:32:31.462 { 00:32:31.462 "job": "Nvme0n1", 00:32:31.462 "core_mask": "0x2", 00:32:31.462 "workload": "randwrite", 00:32:31.462 "status": "finished", 00:32:31.462 "queue_depth": 128, 00:32:31.462 "io_size": 4096, 00:32:31.462 "runtime": 10.00915, 00:32:31.462 "iops": 12096.0321306005, 00:32:31.462 "mibps": 47.250125510158206, 00:32:31.462 "io_failed": 0, 00:32:31.462 "io_timeout": 0, 00:32:31.462 "avg_latency_us": 10578.856589451962, 00:32:31.462 "min_latency_us": 4209.427692307692, 00:32:31.462 "max_latency_us": 583976.1723076924 00:32:31.462 } 00:32:31.462 ], 00:32:31.462 "core_count": 1 00:32:31.462 } 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 122039 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 122039 ']' 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 122039 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122039 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:31.462 killing process with pid 122039 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122039' 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 122039 00:32:31.462 Received shutdown signal, test time was about 10.000000 seconds 00:32:31.462 00:32:31.462 Latency(us) 00:32:31.462 [2024-11-27T06:34:50.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.462 [2024-11-27T06:34:50.992Z] =================================================================================================================== 00:32:31.462 [2024-11-27T06:34:50.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 122039 00:32:31.462 06:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:31.720 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:31.977 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:31.978 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 121441 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 121441 00:32:32.236 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 121441 Killed "${NVMF_APP[@]}" "$@" 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=122243 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 122243 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 122243 ']' 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
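The dirty branch hinges on the kill above: pid 121441 (the previous nvmf_tgt) gets SIGKILL, so the lvstore is never shut down cleanly, and a fresh target is then started in interrupt mode to recover it. In outline, with the pid and flags exactly as logged:

  kill -9 121441   # SIGKILL on purpose: leaves the lvstore dirty on disk
  wait 121441      # reaps the process; the shell reports it as Killed
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &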
00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.236 06:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:32.236 [2024-11-27 06:34:51.656547] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.236 [2024-11-27 06:34:51.657305] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:32.236 [2024-11-27 06:34:51.657352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.495 [2024-11-27 06:34:51.773101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:32.495 [2024-11-27 06:34:51.802150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.495 [2024-11-27 06:34:51.820215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.495 [2024-11-27 06:34:51.820251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.495 [2024-11-27 06:34:51.820259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.495 [2024-11-27 06:34:51.820266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.495 [2024-11-27 06:34:51.820273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.495 [2024-11-27 06:34:51.820546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.495 [2024-11-27 06:34:51.867922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.495 [2024-11-27 06:34:51.868218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
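With the new target up, recreating the AIO bdev below is what triggers blobstore recovery: the store was never marked clean at shutdown, so the load path replays the metadata and logs each blob it recovers (blob 0x0 and 0x1 in this run). The single call that sets this off, under the same paths as before:

  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  # expected notices: "Performing recovery on blobstore", then "Recover: blob 0x0" / "Recover: blob 0x1"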
00:32:33.084 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.084 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:33.084 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:33.084 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.084 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:33.084 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.084 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:33.346 [2024-11-27 06:34:52.719822] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:33.346 [2024-11-27 06:34:52.720978] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:33.346 [2024-11-27 06:34:52.721498] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b6713084-5c53-44b4-bd38-8e741db90d64 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b6713084-5c53-44b4-bd38-8e741db90d64 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:33.346 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:33.606 06:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b6713084-5c53-44b4-bd38-8e741db90d64 -t 2000 00:32:33.866 [ 00:32:33.866 { 00:32:33.866 "aliases": [ 00:32:33.866 "lvs/lvol" 00:32:33.866 ], 00:32:33.866 "assigned_rate_limits": { 00:32:33.866 "r_mbytes_per_sec": 0, 00:32:33.866 "rw_ios_per_sec": 0, 00:32:33.866 "rw_mbytes_per_sec": 0, 00:32:33.866 "w_mbytes_per_sec": 0 00:32:33.866 }, 00:32:33.866 "block_size": 4096, 00:32:33.866 "claimed": false, 00:32:33.866 "driver_specific": { 00:32:33.866 "lvol": { 00:32:33.866 "base_bdev": "aio_bdev", 00:32:33.866 "clone": false, 00:32:33.866 "esnap_clone": false, 00:32:33.866 
"lvol_store_uuid": "fd1d3ce3-5657-4fa2-8854-54fb5b9f0712", 00:32:33.866 "num_allocated_clusters": 38, 00:32:33.866 "snapshot": false, 00:32:33.866 "thin_provision": false 00:32:33.866 } 00:32:33.866 }, 00:32:33.866 "name": "b6713084-5c53-44b4-bd38-8e741db90d64", 00:32:33.866 "num_blocks": 38912, 00:32:33.866 "product_name": "Logical Volume", 00:32:33.866 "supported_io_types": { 00:32:33.866 "abort": false, 00:32:33.866 "compare": false, 00:32:33.866 "compare_and_write": false, 00:32:33.866 "copy": false, 00:32:33.866 "flush": false, 00:32:33.866 "get_zone_info": false, 00:32:33.866 "nvme_admin": false, 00:32:33.866 "nvme_io": false, 00:32:33.866 "nvme_io_md": false, 00:32:33.866 "nvme_iov_md": false, 00:32:33.866 "read": true, 00:32:33.866 "reset": true, 00:32:33.866 "seek_data": true, 00:32:33.866 "seek_hole": true, 00:32:33.866 "unmap": true, 00:32:33.866 "write": true, 00:32:33.866 "write_zeroes": true, 00:32:33.866 "zcopy": false, 00:32:33.866 "zone_append": false, 00:32:33.866 "zone_management": false 00:32:33.866 }, 00:32:33.866 "uuid": "b6713084-5c53-44b4-bd38-8e741db90d64", 00:32:33.866 "zoned": false 00:32:33.866 } 00:32:33.866 ] 00:32:33.866 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:33.866 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:33.866 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:34.128 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:34.128 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:34.128 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:34.389 [2024-11-27 06:34:53.857163] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:34.389 
06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:34.389 06:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:34.650 2024/11/27 06:34:54 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fd1d3ce3-5657-4fa2-8854-54fb5b9f0712], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:32:34.650 request: 00:32:34.650 { 00:32:34.650 "method": "bdev_lvol_get_lvstores", 00:32:34.650 "params": { 00:32:34.650 "uuid": "fd1d3ce3-5657-4fa2-8854-54fb5b9f0712" 00:32:34.650 } 00:32:34.650 } 00:32:34.650 Got JSON-RPC error response 00:32:34.650 GoRPCClient: error on JSON-RPC call 00:32:34.650 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:34.650 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:34.650 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:34.650 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:34.650 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:34.910 aio_bdev 00:32:34.910 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b6713084-5c53-44b4-bd38-8e741db90d64 00:32:34.910 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b6713084-5c53-44b4-bd38-8e741db90d64 00:32:34.910 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:34.910 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:34.910 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:34.910 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:34.910 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:35.171 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b6713084-5c53-44b4-bd38-8e741db90d64 -t 2000 00:32:35.432 [ 00:32:35.432 { 00:32:35.432 "aliases": [ 00:32:35.432 "lvs/lvol" 00:32:35.432 ], 00:32:35.432 "assigned_rate_limits": { 00:32:35.432 "r_mbytes_per_sec": 0, 00:32:35.432 "rw_ios_per_sec": 0, 00:32:35.432 "rw_mbytes_per_sec": 0, 00:32:35.432 "w_mbytes_per_sec": 0 00:32:35.432 }, 00:32:35.432 "block_size": 4096, 00:32:35.432 "claimed": false, 00:32:35.432 "driver_specific": { 00:32:35.432 "lvol": { 00:32:35.432 "base_bdev": "aio_bdev", 00:32:35.432 "clone": false, 00:32:35.432 "esnap_clone": false, 00:32:35.432 "lvol_store_uuid": "fd1d3ce3-5657-4fa2-8854-54fb5b9f0712", 00:32:35.432 "num_allocated_clusters": 38, 00:32:35.432 "snapshot": false, 00:32:35.432 "thin_provision": false 00:32:35.432 } 00:32:35.432 }, 00:32:35.432 "name": "b6713084-5c53-44b4-bd38-8e741db90d64", 00:32:35.432 "num_blocks": 38912, 00:32:35.432 "product_name": "Logical Volume", 00:32:35.432 "supported_io_types": { 00:32:35.432 "abort": false, 00:32:35.432 "compare": false, 00:32:35.432 "compare_and_write": false, 00:32:35.432 "copy": false, 00:32:35.432 "flush": false, 00:32:35.432 "get_zone_info": false, 00:32:35.432 "nvme_admin": false, 00:32:35.432 "nvme_io": false, 00:32:35.432 "nvme_io_md": false, 00:32:35.432 "nvme_iov_md": false, 00:32:35.432 "read": true, 00:32:35.432 "reset": true, 00:32:35.432 "seek_data": true, 00:32:35.432 "seek_hole": true, 00:32:35.432 "unmap": true, 00:32:35.432 "write": true, 00:32:35.432 "write_zeroes": true, 00:32:35.432 "zcopy": false, 00:32:35.432 "zone_append": false, 00:32:35.432 "zone_management": false 00:32:35.432 }, 00:32:35.432 "uuid": "b6713084-5c53-44b4-bd38-8e741db90d64", 00:32:35.432 "zoned": false 00:32:35.432 } 00:32:35.432 ] 00:32:35.432 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:35.432 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:35.432 06:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:35.692 06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:35.692 06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:35.692 06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:35.952 06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:35.952 
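[Editor's note] The sequence traced above, condensed: deleting the backing AIO bdev hot-removes the lvstore, so the next lookup must fail (Code=-19, No such device); recreating the AIO bdev brings the logical volume back, and the waitforbdev helper polls until it reappears before the cluster counts are rechecked. A sketch under the same paths and names as this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=fd1d3ce3-5657-4fa2-8854-54fb5b9f0712
    lvol=b6713084-5c53-44b4-bd38-8e741db90d64
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    "$rpc" bdev_aio_delete aio_bdev                      # hot-removes the lvstore
    if "$rpc" bdev_lvol_get_lvstores -u "$uuid" 2>/dev/null; then
        echo "lvstore unexpectedly still present" >&2; exit 1
    fi
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096     # re-attach the backing bdev
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b "$lvol" -t 2000 >/dev/null  # poll up to 2000 ms for the lvol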
06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b6713084-5c53-44b4-bd38-8e741db90d64 00:32:36.213 06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd1d3ce3-5657-4fa2-8854-54fb5b9f0712 00:32:36.213 06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:36.473 06:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:37.044 00:32:37.044 real 0m19.097s 00:32:37.044 user 0m27.703s 00:32:37.044 sys 0m5.504s 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:37.044 ************************************ 00:32:37.044 END TEST lvs_grow_dirty 00:32:37.044 ************************************ 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:37.044 nvmf_trace.0 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:37.044 06:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.429 06:34:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.429 rmmod nvme_tcp 00:32:38.429 rmmod nvme_fabrics 00:32:38.429 rmmod nvme_keyring 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 122243 ']' 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 122243 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 122243 ']' 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 122243 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122243 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:38.429 killing process with pid 122243 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122243' 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 122243 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 122243 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:38.429 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:32:38.690 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:38.690 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:38.690 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:38.690 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:38.690 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:38.690 06:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:32:38.690 00:32:38.690 real 0m39.616s 00:32:38.690 user 0m44.965s 00:32:38.690 sys 0m9.227s 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.690 ************************************ 00:32:38.690 END TEST nvmf_lvs_grow 00:32:38.690 ************************************ 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:38.690 ************************************ 00:32:38.690 START TEST nvmf_bdev_io_wait 00:32:38.690 ************************************ 00:32:38.690 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:38.952 * Looking for test storage... 00:32:38.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:38.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.952 --rc genhtml_branch_coverage=1 00:32:38.952 --rc genhtml_function_coverage=1 00:32:38.952 --rc genhtml_legend=1 00:32:38.952 --rc geninfo_all_blocks=1 00:32:38.952 --rc geninfo_unexecuted_blocks=1 00:32:38.952 00:32:38.952 ' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:38.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.952 --rc genhtml_branch_coverage=1 00:32:38.952 --rc genhtml_function_coverage=1 00:32:38.952 --rc genhtml_legend=1 00:32:38.952 --rc geninfo_all_blocks=1 00:32:38.952 --rc geninfo_unexecuted_blocks=1 00:32:38.952 00:32:38.952 ' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:38.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.952 --rc genhtml_branch_coverage=1 00:32:38.952 --rc genhtml_function_coverage=1 00:32:38.952 --rc genhtml_legend=1 00:32:38.952 --rc geninfo_all_blocks=1 00:32:38.952 --rc geninfo_unexecuted_blocks=1 00:32:38.952 00:32:38.952 ' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:38.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.952 --rc genhtml_branch_coverage=1 00:32:38.952 --rc genhtml_function_coverage=1 00:32:38.952 --rc genhtml_legend=1 00:32:38.952 --rc geninfo_all_blocks=1 00:32:38.952 --rc 
geninfo_unexecuted_blocks=1 00:32:38.952 00:32:38.952 ' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:38.952 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:38.953 Cannot find device "nvmf_init_br" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:38.953 Cannot find device "nvmf_init_br2" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:38.953 Cannot find device "nvmf_tgt_br" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:38.953 Cannot find device "nvmf_tgt_br2" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:38.953 Cannot find device "nvmf_init_br" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:38.953 Cannot find device "nvmf_init_br2" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:32:38.953 Cannot find device "nvmf_tgt_br" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:38.953 Cannot find device "nvmf_tgt_br2" 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:32:38.953 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:39.214 Cannot find device "nvmf_br" 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:39.214 Cannot find device "nvmf_init_if" 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:39.214 Cannot find device "nvmf_init_if2" 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:39.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:39.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:39.214 06:34:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:39.214 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:39.214 
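[Editor's note] The namespace wiring built above, condensed into one sketch: veth pairs connect the host side and a target namespace through a bridge, and each ACCEPT rule is tagged with an SPDK_NVMF comment so the iptr teardown seen earlier can strip them again with iptables-save | grep -v SPDK_NVMF | iptables-restore. Shown for one initiator/target pair only:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The ping exchanges that follow simply verify this topology in both directions before the target starts.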
06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:39.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:39.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:32:39.475 00:32:39.475 --- 10.0.0.3 ping statistics --- 00:32:39.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.475 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:39.475 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:39.475 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:32:39.475 00:32:39.475 --- 10.0.0.4 ping statistics --- 00:32:39.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.475 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:39.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:39.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:32:39.475 00:32:39.475 --- 10.0.0.1 ping statistics --- 00:32:39.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.475 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:39.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:39.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:32:39.475 00:32:39.475 --- 10.0.0.2 ping statistics --- 00:32:39.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.475 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=122706 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 122706 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 122706 ']' 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:39.475 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.476 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:39.476 06:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.476 [2024-11-27 06:34:58.834748] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:39.476 [2024-11-27 06:34:58.835412] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:39.476 [2024-11-27 06:34:58.835456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.476 [2024-11-27 06:34:58.955711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:39.476 [2024-11-27 06:34:58.976712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:39.736 [2024-11-27 06:34:59.008142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.736 [2024-11-27 06:34:59.008193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.736 [2024-11-27 06:34:59.008200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.736 [2024-11-27 06:34:59.008205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.736 [2024-11-27 06:34:59.008210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.736 [2024-11-27 06:34:59.009358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.736 [2024-11-27 06:34:59.009794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.736 [2024-11-27 06:34:59.010673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.736 [2024-11-27 06:34:59.010801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.736 [2024-11-27 06:34:59.011372] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
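[Editor's note] The launch traced above, reduced to its essentials: nvmf_tgt runs inside the target namespace, in interrupt mode, with --wait-for-rpc so that subsystem initialization is deferred, and the harness blocks until the RPC socket answers. The polling loop below is a simplified stand-in for the waitforlisten helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    while ! "$rpc" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # wait for /var/tmp/spdk.sock to come up
    done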
00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.307 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.568 [2024-11-27 06:34:59.868147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:40.568 [2024-11-27 06:34:59.869767] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:40.568 [2024-11-27 06:34:59.869817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:40.568 [2024-11-27 06:34:59.869877] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
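[Editor's note] This is the step that gives bdev_io_wait its name: because the app was started with --wait-for-rpc, the bdev layer is still unconfigured, so the test shrinks the bdev_io pool before framework_start_init. The -p and -c flags appear to set the pool and cache sizes (an assumption; the trace does not spell them out), leaving only a handful of bdev_io structs for the four bdevperf workloads to contend over, which forces I/O onto the wait queue. As a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_set_options -p 5 -c 1   # tiny bdev_io pool/cache (assumed meaning of -p/-c)
    "$rpc" framework_start_init          # now finish deferred subsystem initialization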
00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.568 [2024-11-27 06:34:59.875605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.568 Malloc0 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.568 [2024-11-27 06:34:59.935842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=122765 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=122767 00:32:40.568 06:34:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=122768 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.568 { 00:32:40.568 "params": { 00:32:40.568 "name": "Nvme$subsystem", 00:32:40.568 "trtype": "$TEST_TRANSPORT", 00:32:40.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.568 "adrfam": "ipv4", 00:32:40.568 "trsvcid": "$NVMF_PORT", 00:32:40.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.568 "hdgst": ${hdgst:-false}, 00:32:40.568 "ddgst": ${ddgst:-false} 00:32:40.568 }, 00:32:40.568 "method": "bdev_nvme_attach_controller" 00:32:40.568 } 00:32:40.568 EOF 00:32:40.568 )") 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.568 { 00:32:40.568 "params": { 00:32:40.568 "name": "Nvme$subsystem", 00:32:40.568 "trtype": "$TEST_TRANSPORT", 00:32:40.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.568 "adrfam": "ipv4", 00:32:40.568 "trsvcid": "$NVMF_PORT", 00:32:40.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.568 "hdgst": ${hdgst:-false}, 00:32:40.568 "ddgst": ${ddgst:-false} 00:32:40.568 }, 00:32:40.568 "method": "bdev_nvme_attach_controller" 00:32:40.568 } 00:32:40.568 EOF 00:32:40.568 )") 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=122770 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.568 { 00:32:40.568 "params": { 00:32:40.568 "name": "Nvme$subsystem", 00:32:40.568 "trtype": "$TEST_TRANSPORT", 00:32:40.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.568 "adrfam": "ipv4", 00:32:40.568 "trsvcid": "$NVMF_PORT", 00:32:40.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.568 "hdgst": ${hdgst:-false}, 00:32:40.568 "ddgst": ${ddgst:-false} 00:32:40.568 }, 00:32:40.568 "method": "bdev_nvme_attach_controller" 00:32:40.568 } 00:32:40.568 EOF 00:32:40.568 )") 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.568 { 00:32:40.568 "params": { 00:32:40.568 "name": "Nvme$subsystem", 00:32:40.568 "trtype": "$TEST_TRANSPORT", 00:32:40.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.568 "adrfam": "ipv4", 00:32:40.568 "trsvcid": "$NVMF_PORT", 00:32:40.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.568 "hdgst": ${hdgst:-false}, 00:32:40.568 "ddgst": ${ddgst:-false} 00:32:40.568 }, 00:32:40.568 "method": "bdev_nvme_attach_controller" 00:32:40.568 } 00:32:40.568 EOF 00:32:40.568 )") 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:40.568 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:40.568 "params": { 00:32:40.568 "name": "Nvme1", 00:32:40.568 "trtype": "tcp", 00:32:40.568 "traddr": "10.0.0.3", 00:32:40.568 "adrfam": "ipv4", 00:32:40.568 "trsvcid": "4420", 00:32:40.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:40.568 "hdgst": false, 00:32:40.568 "ddgst": false 00:32:40.568 }, 00:32:40.568 "method": "bdev_nvme_attach_controller" 00:32:40.568 }' 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:40.569 "params": { 00:32:40.569 "name": "Nvme1", 00:32:40.569 "trtype": "tcp", 00:32:40.569 "traddr": "10.0.0.3", 00:32:40.569 "adrfam": "ipv4", 00:32:40.569 "trsvcid": "4420", 00:32:40.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:40.569 "hdgst": false, 00:32:40.569 "ddgst": false 00:32:40.569 }, 00:32:40.569 "method": "bdev_nvme_attach_controller" 00:32:40.569 }' 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:40.569 "params": { 00:32:40.569 "name": "Nvme1", 00:32:40.569 "trtype": "tcp", 00:32:40.569 "traddr": "10.0.0.3", 00:32:40.569 "adrfam": "ipv4", 00:32:40.569 "trsvcid": "4420", 00:32:40.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:40.569 "hdgst": false, 00:32:40.569 "ddgst": false 00:32:40.569 }, 00:32:40.569 "method": "bdev_nvme_attach_controller" 00:32:40.569 }' 00:32:40.569 06:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:40.569 "params": { 00:32:40.569 "name": "Nvme1", 00:32:40.569 "trtype": "tcp", 00:32:40.569 "traddr": "10.0.0.3", 00:32:40.569 "adrfam": "ipv4", 00:32:40.569 "trsvcid": "4420", 00:32:40.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:40.569 "hdgst": false, 00:32:40.569 "ddgst": false 00:32:40.569 }, 00:32:40.569 "method": "bdev_nvme_attach_controller" 00:32:40.569 }' 00:32:40.569 [2024-11-27 06:34:59.980634] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:32:40.569 [2024-11-27 06:34:59.980710] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:40.569 [2024-11-27 06:34:59.982311] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:40.569 [2024-11-27 06:34:59.982358] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:40.569 [2024-11-27 06:34:59.982375] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:40.569 [2024-11-27 06:34:59.982409] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:40.569 [2024-11-27 06:34:59.983483] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:40.569 [2024-11-27 06:34:59.983550] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:40.569 06:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 122765 00:32:40.829 [2024-11-27 06:35:00.166522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.829 [2024-11-27 06:35:00.194535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.829 [2024-11-27 06:35:00.211804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:40.829 [2024-11-27 06:35:00.225118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.829 [2024-11-27 06:35:00.253814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.829 [2024-11-27 06:35:00.268809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:40.829 [2024-11-27 06:35:00.287432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.829 [2024-11-27 06:35:00.315163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.829 [2024-11-27 06:35:00.332294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:40.829 [2024-11-27 06:35:00.351080] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.829 Running I/O for 1 seconds... 00:32:41.089 [2024-11-27 06:35:00.377552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.089 Running I/O for 1 seconds... 00:32:41.089 [2024-11-27 06:35:00.393765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:41.089 Running I/O for 1 seconds... 00:32:41.089 Running I/O for 1 seconds... 
00:32:42.026 8128.00 IOPS, 31.75 MiB/s 00:32:42.026 Latency(us) 00:32:42.026 [2024-11-27T06:35:01.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.027 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:42.027 Nvme1n1 : 1.02 8125.28 31.74 0.00 0.00 15589.27 4411.08 25306.98 00:32:42.027 [2024-11-27T06:35:01.557Z] =================================================================================================================== 00:32:42.027 [2024-11-27T06:35:01.557Z] Total : 8125.28 31.74 0.00 0.00 15589.27 4411.08 25306.98 00:32:42.027 11268.00 IOPS, 44.02 MiB/s 00:32:42.027 Latency(us) 00:32:42.027 [2024-11-27T06:35:01.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.027 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:42.027 Nvme1n1 : 1.01 11315.91 44.20 0.00 0.00 11267.51 5797.42 22181.42 00:32:42.027 [2024-11-27T06:35:01.557Z] =================================================================================================================== 00:32:42.027 [2024-11-27T06:35:01.557Z] Total : 11315.91 44.20 0.00 0.00 11267.51 5797.42 22181.42 00:32:42.027 193584.00 IOPS, 756.19 MiB/s 00:32:42.027 Latency(us) 00:32:42.027 [2024-11-27T06:35:01.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.027 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:42.027 Nvme1n1 : 1.00 193226.22 754.79 0.00 0.00 658.57 281.99 1827.45 00:32:42.027 [2024-11-27T06:35:01.557Z] =================================================================================================================== 00:32:42.027 [2024-11-27T06:35:01.557Z] Total : 193226.22 754.79 0.00 0.00 658.57 281.99 1827.45 00:32:42.027 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 122767 00:32:42.027 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 122768 00:32:42.027 9020.00 IOPS, 35.23 MiB/s 00:32:42.027 Latency(us) 00:32:42.027 [2024-11-27T06:35:01.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.027 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:42.027 Nvme1n1 : 1.01 9152.63 35.75 0.00 0.00 13952.80 3302.01 33675.42 00:32:42.027 [2024-11-27T06:35:01.557Z] =================================================================================================================== 00:32:42.027 [2024-11-27T06:35:01.557Z] Total : 9152.63 35.75 0.00 0.00 13952.80 3302.01 33675.42 00:32:42.027 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 122770 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.287 rmmod nvme_tcp 00:32:42.287 rmmod nvme_fabrics 00:32:42.287 rmmod nvme_keyring 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 122706 ']' 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 122706 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 122706 ']' 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 122706 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122706 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.287 killing process with pid 122706 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122706' 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 122706 00:32:42.287 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 122706 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:42.547 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:42.548 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:42.548 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:42.548 06:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:42.548 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:42.548 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:42.548 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:42.548 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.548 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.548 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.808 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:32:42.809 00:32:42.809 real 0m3.861s 00:32:42.809 user 0m11.773s 00:32:42.809 sys 0m1.985s 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:42.809 ************************************ 00:32:42.809 END TEST nvmf_bdev_io_wait 00:32:42.809 ************************************ 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:42.809 ************************************ 00:32:42.809 START TEST nvmf_queue_depth 00:32:42.809 ************************************ 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:42.809 * Looking for test storage... 00:32:42.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:42.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.809 --rc genhtml_branch_coverage=1 00:32:42.809 --rc genhtml_function_coverage=1 00:32:42.809 --rc genhtml_legend=1 00:32:42.809 --rc geninfo_all_blocks=1 00:32:42.809 --rc geninfo_unexecuted_blocks=1 00:32:42.809 00:32:42.809 ' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:42.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.809 --rc genhtml_branch_coverage=1 00:32:42.809 --rc genhtml_function_coverage=1 00:32:42.809 --rc genhtml_legend=1 00:32:42.809 --rc geninfo_all_blocks=1 00:32:42.809 --rc geninfo_unexecuted_blocks=1 00:32:42.809 00:32:42.809 ' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:42.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.809 --rc genhtml_branch_coverage=1 00:32:42.809 --rc genhtml_function_coverage=1 00:32:42.809 --rc genhtml_legend=1 00:32:42.809 --rc geninfo_all_blocks=1 00:32:42.809 --rc geninfo_unexecuted_blocks=1 00:32:42.809 00:32:42.809 ' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:42.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.809 --rc genhtml_branch_coverage=1 00:32:42.809 --rc genhtml_function_coverage=1 00:32:42.809 --rc genhtml_legend=1 00:32:42.809 --rc geninfo_all_blocks=1 00:32:42.809 --rc 
geninfo_unexecuted_blocks=1 00:32:42.809 00:32:42.809 ' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.809 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:42.810 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:43.071 Cannot find device "nvmf_init_br" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:43.071 Cannot find device "nvmf_init_br2" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:43.071 Cannot find device "nvmf_tgt_br" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:43.071 Cannot find device "nvmf_tgt_br2" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:43.071 Cannot find device "nvmf_init_br" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:43.071 Cannot find device "nvmf_init_br2" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:32:43.071 
06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:43.071 Cannot find device "nvmf_tgt_br" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:43.071 Cannot find device "nvmf_tgt_br2" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:43.071 Cannot find device "nvmf_br" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:43.071 Cannot find device "nvmf_init_if" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:43.071 Cannot find device "nvmf_init_if2" 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:43.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:43.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:43.071 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:43.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:43.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:32:43.333 00:32:43.333 --- 10.0.0.3 ping statistics --- 00:32:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.333 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:43.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:43.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms 00:32:43.333 00:32:43.333 --- 10.0.0.4 ping statistics --- 00:32:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.333 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:43.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:43.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:32:43.333 00:32:43.333 --- 10.0.0.1 ping statistics --- 00:32:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.333 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:43.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:43.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:32:43.333 00:32:43.333 --- 10.0.0.2 ping statistics --- 00:32:43.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.333 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=123024 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 123024 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 123024 ']' 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.333 06:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:43.333 [2024-11-27 06:35:02.768766] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:43.333 [2024-11-27 06:35:02.769569] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:43.333 [2024-11-27 06:35:02.769612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.593 [2024-11-27 06:35:02.887271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:43.593 [2024-11-27 06:35:02.917337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.593 [2024-11-27 06:35:02.947078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.593 [2024-11-27 06:35:02.947131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.593 [2024-11-27 06:35:02.947138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.593 [2024-11-27 06:35:02.947143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.593 [2024-11-27 06:35:02.947148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.593 [2024-11-27 06:35:02.947454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.593 [2024-11-27 06:35:03.016728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:43.593 [2024-11-27 06:35:03.017009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:44.162 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.163 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:44.163 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:44.163 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.163 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.424 [2024-11-27 06:35:03.704218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.424 Malloc0 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.424 [2024-11-27 06:35:03.756134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=123074 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 123074 /var/tmp/bdevperf.sock 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 123074 ']' 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:44.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.424 06:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.424 [2024-11-27 06:35:03.799629] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:44.424 [2024-11-27 06:35:03.799686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123074 ] 00:32:44.424 [2024-11-27 06:35:03.914037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
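The rpc_cmd calls above build the whole queue-depth fixture: a TCP transport with 8192-byte in-capsule data, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 exporting it on 10.0.0.3:4420, and an idle bdevperf instance parked on its own RPC socket. A sketch of the same sequence as standalone invocations, assuming the target listens on the default /var/tmp/spdk.sock (rpc_cmd is the harness wrapper around scripts/rpc.py):

# Target side:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: start bdevperf idle (-z) with a 1024-deep queue of 4 KiB
# verify I/O for a 10-second run, kicked off later over /var/tmp/bdevperf.sock.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &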
00:32:44.424 [2024-11-27 06:35:03.944633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.685 [2024-11-27 06:35:03.964778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.685 06:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.685 06:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:44.685 06:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:44.685 06:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.685 06:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.685 NVMe0n1 00:32:44.685 06:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.685 06:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:44.685 Running I/O for 10 seconds... 00:32:47.012 8192.00 IOPS, 32.00 MiB/s [2024-11-27T06:35:07.481Z] 8223.00 IOPS, 32.12 MiB/s [2024-11-27T06:35:08.414Z] 8435.33 IOPS, 32.95 MiB/s [2024-11-27T06:35:09.348Z] 9237.25 IOPS, 36.08 MiB/s [2024-11-27T06:35:10.283Z] 9962.00 IOPS, 38.91 MiB/s [2024-11-27T06:35:11.217Z] 10413.50 IOPS, 40.68 MiB/s [2024-11-27T06:35:12.591Z] 10703.00 IOPS, 41.81 MiB/s [2024-11-27T06:35:13.526Z] 10991.75 IOPS, 42.94 MiB/s [2024-11-27T06:35:14.463Z] 11171.11 IOPS, 43.64 MiB/s [2024-11-27T06:35:14.463Z] 11370.10 IOPS, 44.41 MiB/s 00:32:54.933 Latency(us) 00:32:54.933 [2024-11-27T06:35:14.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.933 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:54.933 Verification LBA range: start 0x0 length 0x4000 00:32:54.933 NVMe0n1 : 10.06 11395.89 44.52 0.00 0.00 89545.06 22887.19 76626.71 00:32:54.933 [2024-11-27T06:35:14.463Z] =================================================================================================================== 00:32:54.933 [2024-11-27T06:35:14.463Z] Total : 11395.89 44.52 0.00 0.00 89545.06 22887.19 76626.71 00:32:54.933 { 00:32:54.933 "results": [ 00:32:54.933 { 00:32:54.933 "job": "NVMe0n1", 00:32:54.933 "core_mask": "0x1", 00:32:54.933 "workload": "verify", 00:32:54.933 "status": "finished", 00:32:54.933 "verify_range": { 00:32:54.933 "start": 0, 00:32:54.933 "length": 16384 00:32:54.933 }, 00:32:54.933 "queue_depth": 1024, 00:32:54.933 "io_size": 4096, 00:32:54.933 "runtime": 10.060996, 00:32:54.933 "iops": 11395.889631603073, 00:32:54.933 "mibps": 44.515193873449505, 00:32:54.933 "io_failed": 0, 00:32:54.933 "io_timeout": 0, 00:32:54.933 "avg_latency_us": 89545.05566194477, 00:32:54.933 "min_latency_us": 22887.187692307692, 00:32:54.933 "max_latency_us": 76626.7076923077 00:32:54.933 } 00:32:54.933 ], 00:32:54.933 "core_count": 1 00:32:54.933 } 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 123074 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 123074 ']' 
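Once bdevperf reports ready, the test attaches the exported namespace through bdevperf's own RPC socket and triggers the queued run; the JSON block above is what perform_tests prints when the 10-second run completes. The two calls, condensed (the jq line is a hypothetical post-processing step, not part of the test):

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Blocks for the -t 10 runtime, then emits the results JSON seen above:
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

# Hypothetical: pull the headline IOPS out of the printed JSON.
#   ... perform_tests | jq '.results[0].iops'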
00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 123074 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123074 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:54.933 killing process with pid 123074 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123074' 00:32:54.933 Received shutdown signal, test time was about 10.000000 seconds 00:32:54.933 00:32:54.933 Latency(us) 00:32:54.933 [2024-11-27T06:35:14.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.933 [2024-11-27T06:35:14.463Z] =================================================================================================================== 00:32:54.933 [2024-11-27T06:35:14.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 123074 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 123074 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.933 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.193 rmmod nvme_tcp 00:32:55.193 rmmod nvme_fabrics 00:32:55.193 rmmod nvme_keyring 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 123024 ']' 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 123024 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # '[' -z 123024 ']' 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 123024 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123024 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:55.193 killing process with pid 123024 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123024' 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 123024 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 123024 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:55.193 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:32:55.194 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:55.194 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:55.194 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:55.194 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:55.194 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:55.194 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:32:55.453 00:32:55.453 real 0m12.742s 00:32:55.453 user 0m20.292s 00:32:55.453 sys 0m2.180s 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:55.453 ************************************ 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:55.453 END TEST nvmf_queue_depth 00:32:55.453 ************************************ 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:55.453 ************************************ 00:32:55.453 START TEST nvmf_target_multipath 00:32:55.453 ************************************ 00:32:55.453 06:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:55.716 * Looking for test storage... 
00:32:55.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:55.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.716 --rc genhtml_branch_coverage=1 00:32:55.716 --rc genhtml_function_coverage=1 00:32:55.716 --rc genhtml_legend=1 00:32:55.716 --rc geninfo_all_blocks=1 00:32:55.716 --rc geninfo_unexecuted_blocks=1 00:32:55.716 00:32:55.716 ' 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:55.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.716 --rc genhtml_branch_coverage=1 00:32:55.716 --rc genhtml_function_coverage=1 00:32:55.716 --rc genhtml_legend=1 00:32:55.716 --rc geninfo_all_blocks=1 00:32:55.716 --rc geninfo_unexecuted_blocks=1 00:32:55.716 00:32:55.716 ' 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:55.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.716 --rc genhtml_branch_coverage=1 00:32:55.716 --rc genhtml_function_coverage=1 00:32:55.716 --rc genhtml_legend=1 00:32:55.716 --rc geninfo_all_blocks=1 00:32:55.716 --rc geninfo_unexecuted_blocks=1 00:32:55.716 00:32:55.716 ' 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:55.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.716 --rc genhtml_branch_coverage=1 00:32:55.716 --rc genhtml_function_coverage=1 00:32:55.716 --rc 
genhtml_legend=1 00:32:55.716 --rc geninfo_all_blocks=1 00:32:55.716 --rc geninfo_unexecuted_blocks=1 00:32:55.716 00:32:55.716 ' 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.716 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.717 06:35:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.717 06:35:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:55.717 Cannot find device "nvmf_init_br" 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:55.717 Cannot find device "nvmf_init_br2" 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:55.717 Cannot find device "nvmf_tgt_br" 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:55.717 Cannot find device "nvmf_tgt_br2" 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:32:55.717 Cannot find device "nvmf_init_br" 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:55.717 Cannot find device "nvmf_init_br2" 00:32:55.717 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:55.718 Cannot find device "nvmf_tgt_br" 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:55.718 Cannot find device "nvmf_tgt_br2" 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:55.718 Cannot find device "nvmf_br" 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:55.718 Cannot find device "nvmf_init_if" 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:55.718 Cannot find device "nvmf_init_if2" 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:55.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:55.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:55.718 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:55.979 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:55.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:55.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:32:55.980 00:32:55.980 --- 10.0.0.3 ping statistics --- 00:32:55.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.980 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:55.980 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:55.980 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:32:55.980 00:32:55.980 --- 10.0.0.4 ping statistics --- 00:32:55.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.980 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:55.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:55.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:32:55.980 00:32:55.980 --- 10.0.0.1 ping statistics --- 00:32:55.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.980 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:55.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:55.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:32:55.980 00:32:55.980 --- 10.0.0.2 ping statistics --- 00:32:55.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.980 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=123434 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 123434 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 123434 ']' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
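The nvmf_veth_init block above assembles the test network: two veth pairs whose target-side ends live in the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to a bridge, iptables ACCEPT rules for port 4420, and single-packet pings as a sanity check. A condensed sketch of the first pair under the same names (the *_if2 pair, 10.0.0.2 <-> 10.0.0.4, is symmetric):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # join the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # host -> target-namespace sanity check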
00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.980 06:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:56.240 [2024-11-27 06:35:15.510490] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:56.240 [2024-11-27 06:35:15.511242] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:32:56.240 [2024-11-27 06:35:15.511284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.240 [2024-11-27 06:35:15.628976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:56.240 [2024-11-27 06:35:15.649920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:56.240 [2024-11-27 06:35:15.677994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.240 [2024-11-27 06:35:15.678046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:56.240 [2024-11-27 06:35:15.678052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.240 [2024-11-27 06:35:15.678057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.240 [2024-11-27 06:35:15.678062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:56.240 [2024-11-27 06:35:15.678956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.240 [2024-11-27 06:35:15.679039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:56.240 [2024-11-27 06:35:15.680140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:56.240 [2024-11-27 06:35:15.680273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.240 [2024-11-27 06:35:15.757635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:56.240 [2024-11-27 06:35:15.757754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:56.240 [2024-11-27 06:35:15.758517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:56.240 [2024-11-27 06:35:15.758538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:56.240 [2024-11-27 06:35:15.759226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
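This multipath target comes up with -m 0xF, i.e. core bitmask 0b1111 (cores 0 through 3), which is why the log reports "Total cores available: 4", starts a reactor on each of those cores, and flips four poll-group threads plus app_thread to interrupt mode; the earlier queue-depth target used -m 0x2, core 1 only. The launch line, restated with the mask spelled out:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF    # 0xF = 0b1111 = cores 0,1,2,3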
00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:57.181 [2024-11-27 06:35:16.637066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.181 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:57.441 Malloc0 00:32:57.441 06:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:32:57.702 06:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:57.962 06:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:58.222 [2024-11-27 06:35:17.545002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:58.222 06:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:32:58.484 [2024-11-27 06:35:17.760880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:32:58.484 06:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:32:58.484 06:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:32:58.745 06:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:32:58.745 06:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:32:58.745 06:35:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:58.745 06:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:58.745 06:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=123565 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:33:00.696 06:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:33:00.696 [global] 00:33:00.696 thread=1 00:33:00.696 invalidate=1 00:33:00.696 rw=randrw 00:33:00.696 time_based=1 00:33:00.696 runtime=6 00:33:00.696 ioengine=libaio 00:33:00.696 direct=1 00:33:00.696 bs=4096 00:33:00.696 iodepth=128 00:33:00.696 norandommap=0 00:33:00.696 numjobs=1 00:33:00.696 00:33:00.696 verify_dump=1 00:33:00.696 verify_backlog=512 00:33:00.696 verify_state_save=0 00:33:00.696 do_verify=1 00:33:00.696 verify=crc32c-intel 00:33:00.696 [job0] 00:33:00.696 filename=/dev/nvme0n1 00:33:00.696 Could not set queue depth (nvme0n1) 00:33:00.696 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.696 fio-3.35 00:33:00.696 Starting 1 thread 00:33:01.647 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:33:01.906 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
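The [global]/[job0] stanza above is the job file that fio-wrapper generates from -p nvmf -i 4096 -d 128 -t randrw -r 6 -v: 4 KiB random read/write at queue depth 128 against the multipath device, time-based for 6 s, with crc32c-intel data verification. Run by hand, the equivalent would be roughly:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=randrw --bs=4096 --iodepth=128 --numjobs=1 --thread=1 --invalidate=1 \
      --time_based=1 --runtime=6 --norandommap=0 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
      --verify_state_save=0

The "Could not set queue depth (nvme0n1)" warning is fio failing to adjust the device's queue depth attribute; in these runs it is benign and the job still submits at iodepth=128 through libaio.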
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:02.164 06:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:03.098 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:03.098 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
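check_ana_state, whose body is interleaved through the trace as multipath.sh@18-@26, is a bounded poll on the path's ana_state attribute; the sleep 1s above is it waiting for the new state to land after the RPC. Pulled out as a standalone helper (a sketch):

  check_ana_state() {
      local path=$1 ana_state=$2 timeout=20
      local f=/sys/block/$path/ana_state
      while [[ ! -e "$f" || "$(<"$f")" != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1    # never reached the expected state
          sleep 1s
      done
  }

  check_ana_state nvme0c0n1 inaccessible      # as called at multipath.sh@95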
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:03.098 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:03.098 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:33:03.356 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:03.614 06:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:04.548 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:04.548 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
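Putting the choreography together: the listener ANA states are walked through a full failover/failback cycle while the 6 s fio job is in flight, each flip confirmed by the poller above before the next one is issued.

  # t0: both paths optimized; fio starts against /dev/nvme0n1
  # t1: 10.0.0.3 -> inaccessible,  10.0.0.4 -> non_optimized   (I/O fails over to path 2)
  # t2: 10.0.0.3 -> non_optimized, 10.0.0.4 -> inaccessible    (I/O fails back to path 1)
  # pass criterion: fio completes with err= 0 in the summary that follows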
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:04.548 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:04.548 06:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 123565 00:33:07.077 00:33:07.077 job0: (groupid=0, jobs=1): err= 0: pid=123586: Wed Nov 27 06:35:26 2024 00:33:07.077 read: IOPS=15.0k, BW=58.7MiB/s (61.6MB/s)(352MiB/6002msec) 00:33:07.077 slat (nsec): min=1465, max=5264.0k, avg=39030.30, stdev=205187.37 00:33:07.077 clat (usec): min=918, max=13256, avg=5721.55, stdev=1031.51 00:33:07.077 lat (usec): min=925, max=13268, avg=5760.58, stdev=1044.88 00:33:07.077 clat percentiles (usec): 00:33:07.077 | 1.00th=[ 3490], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5080], 00:33:07.077 | 30.00th=[ 5276], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5735], 00:33:07.077 | 70.00th=[ 5932], 80.00th=[ 6325], 90.00th=[ 7111], 95.00th=[ 7701], 00:33:07.077 | 99.00th=[ 8979], 99.50th=[ 9896], 99.90th=[10814], 99.95th=[11338], 00:33:07.077 | 99.99th=[12256] 00:33:07.077 bw ( KiB/s): min=18432, max=38610, per=51.68%, avg=31076.55, stdev=7374.08, samples=11 00:33:07.077 iops : min= 4608, max= 9652, avg=7769.09, stdev=1843.47, samples=11 00:33:07.077 write: IOPS=8971, BW=35.0MiB/s (36.7MB/s)(186MiB/5293msec); 0 zone resets 00:33:07.077 slat (usec): min=2, max=4115, avg=47.22, stdev=122.25 00:33:07.078 clat (usec): min=816, max=11405, avg=5160.26, stdev=783.65 00:33:07.078 lat (usec): min=834, max=11423, avg=5207.48, stdev=786.82 00:33:07.078 clat percentiles (usec): 00:33:07.078 | 1.00th=[ 2900], 5.00th=[ 4080], 10.00th=[ 4490], 20.00th=[ 4752], 00:33:07.078 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:33:07.078 | 70.00th=[ 5342], 80.00th=[ 5538], 90.00th=[ 6063], 95.00th=[ 6652], 00:33:07.078 | 99.00th=[ 7504], 99.50th=[ 8094], 99.90th=[ 9372], 99.95th=[ 9896], 00:33:07.078 | 99.99th=[10814] 00:33:07.078 bw ( KiB/s): min=19424, max=39009, per=86.79%, avg=31147.00, stdev=7168.84, samples=11 00:33:07.078 iops : min= 4856, max= 9752, avg=7786.73, stdev=1792.18, samples=11 00:33:07.078 lat (usec) : 1000=0.01% 00:33:07.078 lat (msec) : 2=0.05%, 4=3.40%, 10=96.22%, 20=0.32% 00:33:07.078 cpu : usr=3.33%, sys=17.29%, ctx=10436, majf=0, minf=114 00:33:07.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:33:07.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:07.078 issued rwts: total=90235,47488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:07.078 00:33:07.078 Run status group 0 (all jobs): 00:33:07.078 READ: bw=58.7MiB/s (61.6MB/s), 58.7MiB/s-58.7MiB/s (61.6MB/s-61.6MB/s), io=352MiB (370MB), run=6002-6002msec 00:33:07.078 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=186MiB (195MB), run=5293-5293msec 00:33:07.078 00:33:07.078 Disk stats (read/write): 00:33:07.078 nvme0n1: ios=88886/46703, merge=0/0, ticks=483920/231386, in_queue=715306, util=99.28% 00:33:07.078 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:33:07.078 06:35:26 
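The fio summary is internally consistent, which is a quick way to sanity-check a run like this: issued I/Os times block size over runtime reproduces the quoted bandwidth on both the read and write side.

  echo $(( 90235 * 4096 / 6002 ))   # 61579 bytes/ms ~= 61.6 MB/s, the READ line's 58.7 MiB/s
  echo $(( 47488 * 4096 / 5293 ))   # 36748 bytes/ms ~= 36.7 MB/s, the WRITE line's 35.0 MiB/s

Together with err= 0 and the 99.28% device utilization in the disk stats, that is the whole pass signal for the numa-policy phase.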
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:33:07.336 06:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:08.271 06:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:08.271 06:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:08.271 06:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:08.271 06:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:33:08.271 06:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=123714 00:33:08.271 06:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:33:08.271 06:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:33:08.271 [global] 00:33:08.271 thread=1 00:33:08.271 invalidate=1 00:33:08.271 rw=randrw 00:33:08.271 time_based=1 00:33:08.271 runtime=6 00:33:08.271 ioengine=libaio 00:33:08.271 direct=1 00:33:08.271 bs=4096 00:33:08.271 iodepth=128 00:33:08.271 norandommap=0 00:33:08.271 numjobs=1 00:33:08.271 00:33:08.271 verify_dump=1 00:33:08.271 verify_backlog=512 00:33:08.271 verify_state_save=0 00:33:08.271 do_verify=1 00:33:08.271 verify=crc32c-intel 00:33:08.271 [job0] 00:33:08.271 filename=/dev/nvme0n1 00:33:08.271 Could not set queue depth (nvme0n1) 00:33:08.530 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:08.530 fio-3.35 00:33:08.530 Starting 1 thread 00:33:09.470 06:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:33:09.470 06:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
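The bare "echo numa" before the first fio pass and "echo round-robin" here are xtrace artifacts: set -x prints the echo but not its redirection, which presumably targets the subsystem's I/O policy attribute, i.e. roughly:

  echo numa        > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy   # pass 1: NUMA-aware path selection
  echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy   # pass 2: alternate across paths

So the second fio pass starts from two optimized paths under round-robin scheduling, then replays the same failover/failback cycle as the first pass.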
-e /sys/block/nvme0c0n1/ana_state ]] 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:09.728 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:09.729 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:09.729 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:09.729 06:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:10.662 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:10.662 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:10.662 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:10.662 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:33:10.920 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:11.177 06:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:12.111 06:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:12.111 06:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:12.111 06:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:12.111 06:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 123714 00:33:14.640 00:33:14.640 job0: (groupid=0, jobs=1): err= 0: pid=123738: Wed Nov 27 06:35:33 2024 00:33:14.640 read: IOPS=17.4k, BW=68.1MiB/s (71.5MB/s)(409MiB/6005msec) 00:33:14.640 slat (usec): min=2, max=7716, avg=29.99, stdev=175.41 00:33:14.640 clat (usec): min=193, max=14754, avg=5024.96, stdev=1255.25 00:33:14.640 lat (usec): min=200, max=14765, avg=5054.96, stdev=1269.39 00:33:14.640 clat percentiles (usec): 00:33:14.640 | 1.00th=[ 1516], 5.00th=[ 2606], 10.00th=[ 3228], 20.00th=[ 4228], 00:33:14.640 | 30.00th=[ 4817], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5342], 00:33:14.640 | 70.00th=[ 5538], 80.00th=[ 5800], 90.00th=[ 6259], 95.00th=[ 6980], 00:33:14.640 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[ 9241], 99.95th=[ 9765], 00:33:14.640 | 99.99th=[13173] 00:33:14.640 bw ( KiB/s): min=20256, max=55568, per=50.14%, avg=34992.00, stdev=10586.46, samples=12 00:33:14.640 iops : min= 5064, max=13892, avg=8748.00, stdev=2646.61, samples=12 00:33:14.640 write: IOPS=10.1k, BW=39.6MiB/s (41.6MB/s)(205MiB/5184msec); 0 zone resets 00:33:14.640 slat (usec): min=8, max=3707, avg=36.01, stdev=94.95 00:33:14.640 clat (usec): min=194, max=9442, avg=4480.51, stdev=1126.88 00:33:14.640 lat (usec): min=217, max=9455, avg=4516.52, stdev=1135.45 00:33:14.640 clat percentiles (usec): 00:33:14.640 | 1.00th=[ 1401], 5.00th=[ 2040], 10.00th=[ 2573], 20.00th=[ 3621], 00:33:14.640 | 30.00th=[ 4490], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4948], 00:33:14.640 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5604], 00:33:14.640 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8455], 99.95th=[ 8586], 00:33:14.640 | 99.99th=[ 8848] 00:33:14.640 bw ( KiB/s): min=20960, 
max=55976, per=86.31%, avg=35030.00, stdev=10178.84, samples=12 00:33:14.640 iops : min= 5240, max=13994, avg=8757.50, stdev=2544.71, samples=12 00:33:14.640 lat (usec) : 250=0.01%, 500=0.06%, 750=0.12%, 1000=0.16% 00:33:14.640 lat (msec) : 2=2.81%, 4=16.08%, 10=80.74%, 20=0.02% 00:33:14.640 cpu : usr=3.85%, sys=18.10%, ctx=12521, majf=0, minf=90 00:33:14.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:14.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:14.640 issued rwts: total=104764,52602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:14.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:14.640 00:33:14.640 Run status group 0 (all jobs): 00:33:14.640 READ: bw=68.1MiB/s (71.5MB/s), 68.1MiB/s-68.1MiB/s (71.5MB/s-71.5MB/s), io=409MiB (429MB), run=6005-6005msec 00:33:14.640 WRITE: bw=39.6MiB/s (41.6MB/s), 39.6MiB/s-39.6MiB/s (41.6MB/s-41.6MB/s), io=205MiB (215MB), run=5184-5184msec 00:33:14.640 00:33:14.640 Disk stats (read/write): 00:33:14.640 nvme0n1: ios=103411/51694, merge=0/0, ticks=492606/222304, in_queue=714910, util=98.50% 00:33:14.640 06:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:14.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:33:14.640 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:14.899 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:33:14.899 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:33:14.899 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:33:14.899 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:33:14.899 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:14.899 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:14.899 06:35:34 
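The same cross-check for the round-robin pass, which in this run moved noticeably more I/O than the numa pass (409 MiB vs 352 MiB read in roughly 6 s):

  echo $(( 104764 * 4096 / 6005 ))  # 71459 bytes/ms ~= 71.5 MB/s, the READ line's 68.1 MiB/s
  echo $(( 52602 * 4096 / 5184 ))   # 41562 bytes/ms ~= 41.6 MB/s, the WRITE line's 39.6 MiB/s

Note also that the single "nvme disconnect -n nqn.2016-06.io.spdk:cnode1" reports "disconnected 2 controller(s)": disconnecting by NQN tears down both paths of the multipath device at once.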
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:14.900 rmmod nvme_tcp 00:33:14.900 rmmod nvme_fabrics 00:33:14.900 rmmod nvme_keyring 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 123434 ']' 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 123434 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 123434 ']' 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 123434 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123434 00:33:14.900 killing process with pid 123434 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123434' 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 123434 00:33:14.900 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 123434 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
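killprocess, traced above via autotest_common.sh@954-@978, guards the kill with sanity checks so a stale pid can never take out an unrelated process. Stripped down, it is roughly:

  killprocess() {
      local pid=$1 name
      name=$(ps --no-headers -o comm= "$pid") || return   # already gone?
      [[ $name == sudo ]] && return 1                     # never signal a wrapping sudo
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                          # reaps it when it is our child
  }

Here the comm check returns reactor_0, the SPDK main thread name, so pid 123434 is confirmed to still be the nvmf target before it is killed.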
nvmf/common.sh@791 -- # iptables-restore 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:15.161 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:33:15.423 ************************************ 00:33:15.423 END TEST nvmf_target_multipath 00:33:15.423 ************************************ 00:33:15.423 00:33:15.423 real 0m19.830s 00:33:15.423 user 1m9.103s 00:33:15.423 sys 0m5.978s 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.423 ************************************ 00:33:15.423 START TEST nvmf_zcopy 00:33:15.423 ************************************ 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:15.423 * Looking for test storage... 00:33:15.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:15.423 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:15.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.686 --rc genhtml_branch_coverage=1 00:33:15.686 --rc genhtml_function_coverage=1 00:33:15.686 --rc genhtml_legend=1 00:33:15.686 --rc geninfo_all_blocks=1 00:33:15.686 --rc geninfo_unexecuted_blocks=1 00:33:15.686 00:33:15.686 ' 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:15.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.686 --rc genhtml_branch_coverage=1 00:33:15.686 --rc genhtml_function_coverage=1 00:33:15.686 --rc genhtml_legend=1 00:33:15.686 --rc geninfo_all_blocks=1 00:33:15.686 --rc geninfo_unexecuted_blocks=1 00:33:15.686 00:33:15.686 ' 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:15.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.686 --rc genhtml_branch_coverage=1 00:33:15.686 --rc genhtml_function_coverage=1 00:33:15.686 --rc genhtml_legend=1 00:33:15.686 --rc geninfo_all_blocks=1 00:33:15.686 --rc geninfo_unexecuted_blocks=1 00:33:15.686 00:33:15.686 ' 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:15.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.686 --rc genhtml_branch_coverage=1 00:33:15.686 --rc genhtml_function_coverage=1 00:33:15.686 --rc genhtml_legend=1 00:33:15.686 --rc geninfo_all_blocks=1 00:33:15.686 --rc geninfo_unexecuted_blocks=1 00:33:15.686 00:33:15.686 ' 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
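The scripts/common.sh block above is an ordinary dotted-version comparison, used to decide whether the installed lcov predates a given release and needs the older coverage flags. Its core, pulled out with numeric components assumed (the original vets each component with a decimal() check; a sketch):

  lt() {   # true if version $1 < $2
      local -a v1 v2
      local i n
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal
  }

  lt 1.15 2 && echo "lcov older than 2.x"   # the 'lt 1.15 2' call seen above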
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.686 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.687 06:35:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.687 06:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:15.687 06:35:35 
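build_nvmf_app_args is assembling the target's command line: the shared-memory id, a full trace-flag mask, and, because this whole suite runs with --interrupt-mode, the interrupt-mode switch; nvmfappstart later prefixes the network-namespace wrapper and adds the core mask. The net effect is roughly (binary path illustrative, not from the log):

  NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk             # NVMF_TARGET_NS_CMD prefix
            "$rootdir/build/bin/nvmf_tgt"              # hypothetical target binary path
            -i "$NVMF_APP_SHM_ID" -e 0xFFFF --interrupt-mode)
  "${NVMF_APP[@]}" -m 0x2 &                            # nvmfappstart -m 0x2, see end of section
  nvmfpid=$!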
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:15.687 Cannot find device "nvmf_init_br" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:15.687 Cannot find device "nvmf_init_br2" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:15.687 Cannot find device "nvmf_tgt_br" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:15.687 Cannot find device "nvmf_tgt_br2" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:15.687 Cannot find device "nvmf_init_br" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:15.687 Cannot find device "nvmf_init_br2" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:15.687 Cannot find device "nvmf_tgt_br" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:15.687 Cannot find device "nvmf_tgt_br2" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:15.687 Cannot find device 
"nvmf_br" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:15.687 Cannot find device "nvmf_init_if" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:15.687 Cannot find device "nvmf_init_if2" 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:15.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:15.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:15.687 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:15.688 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:15.688 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:15.688 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:15.688 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:15.688 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:33:15.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:33:15.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms
00:33:15.949
00:33:15.949 --- 10.0.0.3 ping statistics ---
00:33:15.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:15.949 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:33:15.949 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:33:15.949 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:33:15.949
00:33:15.949 --- 10.0.0.4 ping statistics ---
00:33:15.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:15.949 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:33:15.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:15.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:33:15.949
00:33:15.949 --- 10.0.0.1 ping statistics ---
00:33:15.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:15.949 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:33:15.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:15.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms
00:33:15.949
00:33:15.949 --- 10.0.0.2 ping statistics ---
00:33:15.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:15.949 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=124064
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 124064
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 124064 ']'
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:15.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
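Before the target app comes up, note the shape of the fixture the trace above has just built and verified: veth pairs whose initiator ends (nvmf_init_if*) stay in the root namespace, target ends (nvmf_tgt_if*) move into nvmf_tgt_ns_spdk, all bridge-side peers are enslaved to nvmf_br, iptables opens TCP port 4420, and the four pings prove both directions work. A minimal standalone sketch of the same iproute2 pattern (one pair per side for brevity; names and addresses taken from the trace, run as root):

  ip netns add nvmf_tgt_ns_spdk                           # namespace that will host nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end leaves the root namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator-side address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                 # bridge stitches the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                      # root namespace reaches the target end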
00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.949 06:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:15.949 [2024-11-27 06:35:35.432769] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:15.949 [2024-11-27 06:35:35.433646] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:33:15.949 [2024-11-27 06:35:35.433696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.210 [2024-11-27 06:35:35.550157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:16.210 [2024-11-27 06:35:35.581164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.210 [2024-11-27 06:35:35.612362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.210 [2024-11-27 06:35:35.612411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.210 [2024-11-27 06:35:35.612419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.210 [2024-11-27 06:35:35.612424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.210 [2024-11-27 06:35:35.612429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.210 [2024-11-27 06:35:35.612713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.210 [2024-11-27 06:35:35.685479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.210 [2024-11-27 06:35:35.685758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
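The nvmf/common.sh@508 record above is the key step: the whole nvmf_tgt application is launched inside the namespace, pinned to a single core (-m 0x2) and forced into interrupt mode, which the later "Set spdk_thread (...) to intr mode" notices confirm. A sketch of that launch-and-wait step (binary path as in the trace; the socket poll below is a stand-in for the harness's waitforlisten helper, not its actual implementation):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Do not issue RPCs until the app has opened its JSON-RPC socket:
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done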
00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.154 [2024-11-27 06:35:36.385378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.154 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.155 [2024-11-27 06:35:36.401379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:17.155 06:35:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:17.155 malloc0
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:17.155 {
00:33:17.155 "params": {
00:33:17.155 "name": "Nvme$subsystem",
00:33:17.155 "trtype": "$TEST_TRANSPORT",
00:33:17.155 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:17.155 "adrfam": "ipv4",
00:33:17.155 "trsvcid": "$NVMF_PORT",
00:33:17.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:17.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:17.155 "hdgst": ${hdgst:-false},
00:33:17.155 "ddgst": ${ddgst:-false}
00:33:17.155 },
00:33:17.155 "method": "bdev_nvme_attach_controller"
00:33:17.155 }
00:33:17.155 EOF
00:33:17.155 )")
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:33:17.155 06:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:17.155 "params": {
00:33:17.155 "name": "Nvme1",
00:33:17.155 "trtype": "tcp",
00:33:17.155 "traddr": "10.0.0.3",
00:33:17.155 "adrfam": "ipv4",
00:33:17.155 "trsvcid": "4420",
00:33:17.155 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:17.155 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:17.155 "hdgst": false,
00:33:17.155 "ddgst": false
00:33:17.155 },
00:33:17.155 "method": "bdev_nvme_attach_controller"
00:33:17.155 }'
00:33:17.155 [2024-11-27 06:35:36.484550] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization...
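A handful of RPCs assemble the zcopy target, after which gen_nvmf_target_json renders the resolved bdev_nvme_attach_controller parameters shown above and bdevperf reads them through a /dev/fd process substitution. Roughly equivalent calls via SPDK's scripts/rpc.py (a sketch only; the rpc_cmd wrapper in the trace drives the same JSON-RPC methods over /var/tmp/spdk.sock):

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport with zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MB ram bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # bdevperf then runs the verify workload against the attached controller:
  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192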
00:33:17.155 [2024-11-27 06:35:36.484625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124115 ]
00:33:17.155 [2024-11-27 06:35:36.600831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:17.155 [2024-11-27 06:35:36.626536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:17.155 [2024-11-27 06:35:36.648484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:17.417 Running I/O for 10 seconds...
00:33:19.302 6938.00 IOPS, 54.20 MiB/s
[2024-11-27T06:35:40.223Z] 6961.00 IOPS, 54.38 MiB/s
[2024-11-27T06:35:40.797Z] 6966.67 IOPS, 54.43 MiB/s
[2024-11-27T06:35:42.185Z] 7465.00 IOPS, 58.32 MiB/s
[2024-11-27T06:35:43.129Z] 7797.80 IOPS, 60.92 MiB/s
[2024-11-27T06:35:44.065Z] 8018.50 IOPS, 62.64 MiB/s
[2024-11-27T06:35:45.001Z] 8166.86 IOPS, 63.80 MiB/s
[2024-11-27T06:35:45.936Z] 8270.75 IOPS, 64.62 MiB/s
[2024-11-27T06:35:46.870Z] 8351.33 IOPS, 65.24 MiB/s
[2024-11-27T06:35:46.870Z] 8415.70 IOPS, 65.75 MiB/s
00:33:27.340                                                 Latency(us)
00:33:27.340 [2024-11-27T06:35:46.870Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s   Average      min      max
00:33:27.340 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:27.340 Verification LBA range: start 0x0 length 0x1000
00:33:27.340 Nvme1n1            :      10.01 8418.93   65.77    0.00   0.00  15161.75  2205.54 22383.06
00:33:27.340 [2024-11-27T06:35:46.870Z] ===================================================================================================================
00:33:27.340 [2024-11-27T06:35:46.870Z] Total              :            8418.93   65.77    0.00   0.00  15161.75  2205.54 22383.06
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=124232
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:27.599 {
00:33:27.599 "params": {
00:33:27.599 "name": "Nvme$subsystem",
00:33:27.599 "trtype": "$TEST_TRANSPORT",
00:33:27.599 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:27.599 "adrfam": "ipv4",
00:33:27.599 "trsvcid": "$NVMF_PORT",
00:33:27.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:27.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:27.599 "hdgst": ${hdgst:-false},
00:33:27.599 "ddgst": ${ddgst:-false}
00:33:27.599 },
00:33:27.599 "method":
"bdev_nvme_attach_controller" 00:33:27.599 } 00:33:27.599 EOF 00:33:27.599 )") 00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:27.599 [2024-11-27 06:35:46.929132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.599 [2024-11-27 06:35:46.929158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:27.599 06:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:27.599 "params": { 00:33:27.600 "name": "Nvme1", 00:33:27.600 "trtype": "tcp", 00:33:27.600 "traddr": "10.0.0.3", 00:33:27.600 "adrfam": "ipv4", 00:33:27.600 "trsvcid": "4420", 00:33:27.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:27.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:27.600 "hdgst": false, 00:33:27.600 "ddgst": false 00:33:27.600 }, 00:33:27.600 "method": "bdev_nvme_attach_controller" 00:33:27.600 }' 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.937109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.937124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.945107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.945121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.953106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.953120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 [2024-11-27 06:35:46.954041] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 
00:33:27.600 [2024-11-27 06:35:46.954086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124232 ] 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.961106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.961119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.969106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.969120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.977109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.977122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.985108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.985121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:46.993107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:46.993121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.001108] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.001122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.009108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.009121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.017107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.017121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.029111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.029127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.037108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.037123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.045108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.045123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.053109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 
06:35:47.053123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.061109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.061122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 [2024-11-27 06:35:47.065169] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.600 [2024-11-27 06:35:47.069112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.600 [2024-11-27 06:35:47.069129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.600 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.601 [2024-11-27 06:35:47.077109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.601 [2024-11-27 06:35:47.077123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.601 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.601 [2024-11-27 06:35:47.085108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.601 [2024-11-27 06:35:47.085122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.601 [2024-11-27 06:35:47.087764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.601 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.601 [2024-11-27 06:35:47.093108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.601 [2024-11-27 06:35:47.093121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.601 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:33:27.601 [2024-11-27 06:35:47.101108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.601 [2024-11-27 06:35:47.101122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.601 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.601 [2024-11-27 06:35:47.108669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.601 [2024-11-27 06:35:47.109108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.601 [2024-11-27 06:35:47.109123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.601 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.601 [2024-11-27 06:35:47.117109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.601 [2024-11-27 06:35:47.117122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.601 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.601 [2024-11-27 06:35:47.125108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.601 [2024-11-27 06:35:47.125121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.133109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.133123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.141108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.141122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:33:27.860 [2024-11-27 06:35:47.149108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.149122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.157107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.157121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.165108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.165121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.173108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.173122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.181108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.181122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.189116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.189133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.197112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.197129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.205112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.205128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.213111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.213128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.221112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.221128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.229112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.229128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.237109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.237125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.245116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.245134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:33:27.860 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.860 [2024-11-27 06:35:47.253111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.860 [2024-11-27 06:35:47.253126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 Running I/O for 5 seconds... 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.264514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.264538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.278401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.278425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.287913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.287934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.302177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.302200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.312384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.312408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.326597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.326619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.335911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.335931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.342328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.342348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.352003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.352024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.358337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.358357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:27.861 [2024-11-27 06:35:47.368034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.861 [2024-11-27 06:35:47.368055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:33:27.861 [2024-11-27 06:35:47.382405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:27.861 [2024-11-27 06:35:47.382427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:27.861 2024/11/27 06:35:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record failure (subsystem.c:2126 "Requested NSID 1 already in use", nvmf_rpc.c:1520 "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters) repeats continuously from 06:35:47.397 to 06:35:48.254 as the test keeps retrying nvmf_subsystem_add_ns with the already-attached NSID ...]
00:33:28.900 17580.00 IOPS, 137.34 MiB/s [2024-11-27T06:35:48.430Z]
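
A note on the odd hide_metadata:%!s(bool=false) tokens in the records above: %!s(bool=...) is Go's fmt notation for a verb/type mismatch, i.e. the test harness formats a bool parameter with the %s verb when it logs the RPC arguments. A minimal standalone Go sketch that reproduces the token (the field name is taken from the log; this is not the harness's actual logging code):

package main

import "fmt"

func main() {
	// Formatting a bool with the %s verb triggers fmt's error notation;
	// this prints: hide_metadata:%!s(bool=false)
	fmt.Printf("hide_metadata:%s\n", false)
}
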
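The failure itself is the expected target behavior: every retry asks the target to attach bdev malloc0 as NSID 1 on nqn.2016-06.io.spdk:cnode1 while NSID 1 is already attached, so each nvmf_subsystem_add_ns call is rejected with JSON-RPC error -32602 (invalid parameters). A minimal sketch of that exchange, assuming a local SPDK target listening on the default RPC Unix socket /var/tmp/spdk.sock with the subsystem and the malloc0 bdev already created (an illustration under those assumptions, not the test harness):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

func main() {
	// Connect to SPDK's JSON-RPC Unix socket (default path assumed).
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Same parameters as in the log: attach bdev malloc0 as NSID 1.
	req := map[string]any{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "nvmf_subsystem_add_ns",
		"params": map[string]any{
			"nqn": "nqn.2016-06.io.spdk:cnode1",
			"namespace": map[string]any{
				"bdev_name": "malloc0",
				"nsid":      1,
			},
		},
	}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		log.Fatal(err)
	}

	// When NSID 1 is already in use, the reply carries a JSON-RPC error
	// object with code -32602, matching the log records above.
	var resp struct {
		Error *struct {
			Code    int    `json:"code"`
			Message string `json:"message"`
		} `json:"error"`
	}
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		log.Fatal(err)
	}
	if resp.Error != nil {
		fmt.Printf("Code=%d Msg=%s\n", resp.Error.Code, resp.Error.Message)
	}
}

On the first successful attach the error field is absent; only the subsequent duplicate-NSID retries produce the -32602 responses that dominate this stretch of the log.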
[... the duplicate-NSID failure continues at the same rate from 06:35:48.264 to 06:35:48.764 ...]
00:33:29.422 [2024-11-27 06:35:48.778860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:29.422 [2024-11-27 06:35:48.778883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.788302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.788324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.802415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.802437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.813323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.813346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.826578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.826600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.835956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.835978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.849992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.850019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:33:29.422 [2024-11-27 06:35:48.859229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.859252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.868840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.868861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.882821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.882844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.892133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.892155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.906158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.906180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.918336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.918358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.929006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.929028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.935312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.935334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.422 [2024-11-27 06:35:48.944426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.422 [2024-11-27 06:35:48.944447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.422 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:48.958775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:48.958798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:48.968874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:48.968897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:48.982861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:48.982885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:48.992356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:48.992379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:33:29.682 2024/11/27 06:35:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.006126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.006148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.016639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.016662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.030509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.030531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.041052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.041075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.047381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.047403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.057160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.057182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.063678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.063700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.071511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.071532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.078840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.078861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.088526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.088547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.102819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.102843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.111261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.111284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.120729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.120752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.134857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.682 [2024-11-27 06:35:49.134880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.682 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.682 [2024-11-27 06:35:49.148740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.683 [2024-11-27 06:35:49.148763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.683 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.683 [2024-11-27 06:35:49.162392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.683 [2024-11-27 06:35:49.162415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.683 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.683 [2024-11-27 06:35:49.173047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.683 [2024-11-27 06:35:49.173074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.683 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.683 [2024-11-27 06:35:49.179520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.683 [2024-11-27 06:35:49.179543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.683 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:33:29.683 [2024-11-27 06:35:49.187478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.683 [2024-11-27 06:35:49.187501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.683 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.683 [2024-11-27 06:35:49.201401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.683 [2024-11-27 06:35:49.201422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.683 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.942 [2024-11-27 06:35:49.212113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.942 [2024-11-27 06:35:49.212135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.942 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.942 [2024-11-27 06:35:49.226300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.942 [2024-11-27 06:35:49.226323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.942 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.942 [2024-11-27 06:35:49.237022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.942 [2024-11-27 06:35:49.237044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.942 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.942 [2024-11-27 06:35:49.243368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.942 [2024-11-27 06:35:49.243391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.942 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:29.942 [2024-11-27 06:35:49.252996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
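For context: the failing call above is SPDK's nvmf_subsystem_add_ns JSON-RPC method. A minimal sketch of how this state is reached, assuming a running nvmf target on the default /var/tmp/spdk.sock and the in-tree scripts/rpc.py client; the object names mirror the log, but the exact commands are illustrative rather than lifted from the test scripts:

  # create the subsystem and a malloc bdev once
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  # the first attach claims NSID 1 and succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # any repeat with the same explicit NSID fails with
  # "Requested NSID 1 already in use" / Code=-32602, as logged above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1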
00:33:29.942 17608.00 IOPS, 137.56 MiB/s [2024-11-27T06:35:49.472Z]
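The params blob in each error entry is a Go map dump of the JSON-RPC request body. Reconstructed as the JSON presumably carried on the wire (field values taken from the logged dump; the id and key order are illustrative), the repeated request and its error response would look like:

  {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
   "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
              "namespace": {"bdev_name": "malloc0", "nsid": 1,
                            "no_auto_visible": false, "hide_metadata": false}}}

  {"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid parameters"}}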
00:33:30.463 [2024-11-27 06:35:49.872410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:30.463 [2024-11-27 06:35:49.872432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.878808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.878831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.888434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.888454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.902568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.902591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.911890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.911921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.925711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.925733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.937512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.937535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.951098] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.951119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.960640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.960662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.974651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.974673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.463 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.463 [2024-11-27 06:35:49.988755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.463 [2024-11-27 06:35:49.988778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.722 2024/11/27 06:35:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.722 [2024-11-27 06:35:50.002769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.722 [2024-11-27 06:35:50.002791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.722 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.722 [2024-11-27 06:35:50.012248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.012270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.026951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 
06:35:50.026976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.036962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.036986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.050601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.050623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.059207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.059229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.065916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.065937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.076334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.076356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.090448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.090470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.100383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.100405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.114598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.114622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.124751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.124773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.138679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.138702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.152606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.152628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.166726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.166749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.180572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.180596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.194712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.194735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.208319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.208343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.222088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.222110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.232623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.232645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:30.723 [2024-11-27 06:35:50.246585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.723 [2024-11-27 06:35:50.246608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.723 2024/11/27 06:35:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
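What this burst records: a client keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is still claimed, and the target rejects every attempt with JSON-RPC error -32602 (Invalid parameters), which is consistent with a duplicate-NSID negative test. Below is a minimal Python sketch of one such exchange, with the method name and parameters taken from the log; the socket path /var/tmp/spdk.sock is SPDK's usual default and an assumption here, and the actual client in this run appears to be a Go tool (note the %!s(bool=false) format verbs), so this is an illustrative replay, not the harness itself.

import json
import socket

# Request mirrors the params map printed in the log above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},  # NSID 1 is already in use
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket path
    sock.sendall(json.dumps(request).encode())
    # A single recv() is a simplification; a robust client would keep reading
    # until the JSON document is complete.
    reply = json.loads(sock.recv(65536).decode())
    # While NSID 1 is taken, the target answers with an error object like:
    #   {"code": -32602, "message": "Invalid parameters"}
    print(reply.get("error"))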
00:33:30.983 17601.67 IOPS, 137.51 MiB/s [2024-11-27T06:35:50.513Z]
-- same error sequence repeated ~80 more times around this ticker, 06:35:50.255885 through 06:35:51.142622, parameters unchanged; elapsed-time prefix advances 00:33:30.983 -> 00:33:31.780
NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.085177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.092336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.092359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.105927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.105948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.118652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.118674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.129424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.129446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.142599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.142622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.153068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.153090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.159286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.159307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.168799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.168820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.780 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.780 [2024-11-27 06:35:51.182324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.780 [2024-11-27 06:35:51.182347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.193136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.193158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.199444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.199466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.209073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.209096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.215464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.215487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.224516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.224539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.238791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.238814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.249281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.249309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.255420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.255443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 17615.75 IOPS, 137.62 MiB/s [2024-11-27T06:35:51.311Z] 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.263134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.263156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.272777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.272800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.286557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.286581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:31.781 [2024-11-27 06:35:51.295812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.781 [2024-11-27 06:35:51.295836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.781 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.309626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.309649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.321887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.321919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.334750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.334772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.344103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.344125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.357946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.357967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.369681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.369704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.382526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.382547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.391811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.391832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.405890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.405920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.418424] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.418447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.429378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.429400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.442846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.442867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.061 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.061 [2024-11-27 06:35:51.453340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.061 [2024-11-27 06:35:51.453362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.466686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.466706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.476102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.476124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.490119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 
06:35:51.490141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.499703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.499725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.513523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.513545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.524060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.524082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.538349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.538373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.549173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.549196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.555322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.555344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.062 [2024-11-27 06:35:51.564866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.062 [2024-11-27 06:35:51.564890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.062 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.578713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.578736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.587383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.587406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.596815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.596838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.610650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.610672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.619096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.619119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.629385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.629407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.642853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.642876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.656877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.656900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.669973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.669995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.682541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.682564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.694407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.694430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.705408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.705430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.335 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.335 [2024-11-27 06:35:51.718866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.335 [2024-11-27 06:35:51.718889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.728851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.728874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.742483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.742505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.752829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.752852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.767018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.767041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.776436] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.776459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.790318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.790341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.799629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.799652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.813586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.813608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.823828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.823850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.837642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.837665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.848482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 
06:35:51.848505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.336 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.336 [2024-11-27 06:35:51.862640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.336 [2024-11-27 06:35:51.862663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.873427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.873450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.886836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.886859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.897403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.897425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.910652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.910675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.921428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.921448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.934872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.934895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.944893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.944925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.958954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.958977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.973127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.973150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.980424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.980447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.595 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.595 [2024-11-27 06:35:51.994133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.595 [2024-11-27 06:35:51.994155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.596 [2024-11-27 06:35:52.006240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.596 [2024-11-27 06:35:52.006261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.596 [2024-11-27 06:35:52.015722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.596 [2024-11-27 06:35:52.015745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.596 [2024-11-27 06:35:52.022009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.596 [2024-11-27 06:35:52.022031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.596 [2024-11-27 06:35:52.032659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.596 [2024-11-27 06:35:52.032682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.596 [2024-11-27 06:35:52.046767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.596 [2024-11-27 06:35:52.046790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.596 [2024-11-27 06:35:52.056333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.596 [2024-11-27 06:35:52.056354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.596 [2024-11-27 06:35:52.070466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.596 [2024-11-27 06:35:52.070489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.596 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" / Code=-32602 triple repeats for every retry from 06:35:52.081 through 06:35:52.252 ...]
00:33:32.857 17631.40 IOPS, 137.75 MiB/s [2024-11-27T06:35:52.387Z] [2024-11-27 06:35:52.262190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.857 [2024-11-27 06:35:52.262212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.857
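The Code=-32602 storm above is the intended behavior under test: zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached, so subsystem.c:2126 rejects every attempt. A minimal sketch of the same collision using SPDK's rpc.py (subsystem NQN and bdev name taken from this run; exact flags may vary between SPDK versions):

    # first attach claims NSID 1 on the subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # re-attaching the same NSID is rejected with Code=-32602 (Invalid parameters)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Omitting -n should instead let the target assign the lowest free NSID.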
00:33:32.857 Latency(us)
[2024-11-27T06:35:52.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:32.857 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:32.857 Nvme1n1 : 5.01 17631.24 137.74 0.00 0.00 7252.65 1940.87 12250.19
[2024-11-27T06:35:52.387Z] ===================================================================================================================
[2024-11-27T06:35:52.387Z] Total : 17631.24 137.74 0.00 0.00 7252.65 1940.87 12250.19
00:33:32.857 2024/11/27 06:35:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:32.857 [2024-11-27 06:35:52.269114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.857 [2024-11-27 06:35:52.269132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error triple repeats for each remaining retry from 06:35:52.277 through 06:35:52.389 ...]
00:33:33.117 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (124232) - No such process 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 124232 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.117
06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.117 delay0 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.117 06:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:33:33.117 [2024-11-27 06:35:52.569466] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:41.230 Initializing NVMe Controllers 00:33:41.230 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:33:41.230 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:41.230 Initialization complete. Launching workers. 
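For context, the abort example launched at zcopy.sh@56 above queues I/O against the target and then aborts it: -c 0x1 pins the tool to core 0, -t 5 runs it for five seconds, -q 64 sets the queue depth, -w randrw -M 50 requests a 50/50 random read/write mix, and -r carries the transport ID of the listener configured earlier. A standalone re-run, with the binary path and address taken from this job, would look like:

    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The counters that follow pair each submitted abort with its outcome (success, unsuccessful, or failed to submit).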
00:33:41.230 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 215, failed: 42439 00:33:41.230 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 42496, failed to submit 158 00:33:41.230 success 42444, unsuccessful 52, failed 0 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.230 rmmod nvme_tcp 00:33:41.230 rmmod nvme_fabrics 00:33:41.230 rmmod nvme_keyring 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 124064 ']' 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 124064 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 124064 ']' 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 124064 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124064 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:41.230 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:41.230 killing process with pid 124064 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124064' 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 124064 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 124064 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.231 06:35:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.231 06:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:33:41.231 00:33:41.231 real 0m25.200s 00:33:41.231 user 0m38.734s 00:33:41.231 sys 0m7.332s 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:41.231 ************************************ 00:33:41.231 END TEST nvmf_zcopy 
00:33:41.231 ************************************ 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:41.231 ************************************ 00:33:41.231 START TEST nvmf_nmic 00:33:41.231 ************************************ 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:41.231 * Looking for test storage... 00:33:41.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.231 --rc genhtml_branch_coverage=1 00:33:41.231 --rc genhtml_function_coverage=1 00:33:41.231 --rc genhtml_legend=1 00:33:41.231 --rc geninfo_all_blocks=1 00:33:41.231 --rc geninfo_unexecuted_blocks=1 00:33:41.231 00:33:41.231 ' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.231 --rc genhtml_branch_coverage=1 00:33:41.231 --rc genhtml_function_coverage=1 00:33:41.231 --rc genhtml_legend=1 00:33:41.231 --rc geninfo_all_blocks=1 00:33:41.231 --rc geninfo_unexecuted_blocks=1 00:33:41.231 00:33:41.231 ' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.231 --rc genhtml_branch_coverage=1 00:33:41.231 --rc genhtml_function_coverage=1 00:33:41.231 --rc genhtml_legend=1 00:33:41.231 --rc geninfo_all_blocks=1 00:33:41.231 --rc geninfo_unexecuted_blocks=1 00:33:41.231 00:33:41.231 ' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.231 --rc genhtml_branch_coverage=1 00:33:41.231 --rc genhtml_function_coverage=1 00:33:41.231 --rc genhtml_legend=1 00:33:41.231 --rc geninfo_all_blocks=1 00:33:41.231 --rc geninfo_unexecuted_blocks=1 00:33:41.231 00:33:41.231 ' 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:41.231 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.232 06:36:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:41.232 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:41.233 Cannot find device "nvmf_init_br" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:41.233 Cannot find device "nvmf_init_br2" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:41.233 Cannot find device "nvmf_tgt_br" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:41.233 Cannot find device "nvmf_tgt_br2" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:41.233 Cannot find device "nvmf_init_br" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:41.233 Cannot find device "nvmf_init_br2" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:41.233 Cannot find device "nvmf_tgt_br" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:41.233 Cannot find device "nvmf_tgt_br2" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
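The "Cannot find device" messages in this stretch are expected: before building its topology, nvmf_veth_init tears down whatever interfaces a previous run may have left behind, and every teardown step tolerates failure (hence the "# true" traces after each miss). A condensed sketch of that pattern, assuming the same interface names as this run:

    # best-effort teardown: a missing device is not an error here
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2> /dev/null || true
        ip link set "$dev" down 2> /dev/null || true
    done
    ip link delete nvmf_br type bridge 2> /dev/null || true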
00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:41.233 Cannot find device "nvmf_br" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:41.233 Cannot find device "nvmf_init_if" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:41.233 Cannot find device "nvmf_init_if2" 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:41.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:41.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:41.233 06:36:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:41.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:41.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:33:41.233 00:33:41.233 --- 10.0.0.3 ping statistics --- 00:33:41.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.233 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:41.233 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:41.233 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:33:41.233 00:33:41.233 --- 10.0.0.4 ping statistics --- 00:33:41.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.233 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:41.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:33:41.233 00:33:41.233 --- 10.0.0.1 ping statistics --- 00:33:41.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.233 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:41.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:33:41.233 00:33:41.233 --- 10.0.0.2 ping statistics --- 00:33:41.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.233 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.233 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=124615 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 124615 00:33:41.234 06:36:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 124615 ']' 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:41.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:41.234 06:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:41.234 [2024-11-27 06:36:00.675284] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:41.234 [2024-11-27 06:36:00.676129] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:33:41.234 [2024-11-27 06:36:00.676186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.496 [2024-11-27 06:36:00.796407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:41.496 [2024-11-27 06:36:00.821088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:41.496 [2024-11-27 06:36:00.851610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.496 [2024-11-27 06:36:00.851651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.496 [2024-11-27 06:36:00.851658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.496 [2024-11-27 06:36:00.851663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.496 [2024-11-27 06:36:00.851668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:41.496 [2024-11-27 06:36:00.852666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.496 [2024-11-27 06:36:00.852988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:41.496 [2024-11-27 06:36:00.854074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:41.496 [2024-11-27 06:36:00.854199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.496 [2024-11-27 06:36:00.953804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:41.496 [2024-11-27 06:36:00.955145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
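Because nvmf_tgt was started with --interrupt-mode (the launch line traced above), its reactors and poll-group threads run event-driven instead of busy-polling, which is what the thread.c notices around this point record. The effective launch command for this job was:

    # start the target inside the test namespace, interrupt-driven, on cores 0-3
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
    # -m 0xF matches the four "Reactor started" notices above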
00:33:41.496 [2024-11-27 06:36:00.955311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:41.496 [2024-11-27 06:36:00.955572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:41.496 [2024-11-27 06:36:00.955855] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:42.067 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.067 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:42.067 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:42.067 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:42.067 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 [2024-11-27 06:36:01.632015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 Malloc0 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 
06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 [2024-11-27 06:36:01.711061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 test case1: single bdev can't be used in multiple subsystems 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 [2024-11-27 06:36:01.734743] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:42.328 [2024-11-27 06:36:01.734781] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:42.328 [2024-11-27 06:36:01.734790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.328 2024/11/27 06:36:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:33:42.328 request: 00:33:42.328 { 00:33:42.328 "method": "nvmf_subsystem_add_ns", 00:33:42.328 "params": { 00:33:42.328 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:42.328 "namespace": { 00:33:42.328 "bdev_name": "Malloc0", 00:33:42.328 "no_auto_visible": 
false, 00:33:42.328 "hide_metadata": false 00:33:42.328 } 00:33:42.328 } 00:33:42.328 } 00:33:42.328 Got JSON-RPC error response 00:33:42.328 GoRPCClient: error on JSON-RPC call 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:42.328 Adding namespace failed - expected result. 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:42.328 test case2: host connect to nvmf target in multiple paths 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:42.328 [2024-11-27 06:36:01.746797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:33:42.328 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:33:42.588 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:42.588 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:42.588 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:42.588 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:42.589 06:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:44.505 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:44.505 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:44.505 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:44.505 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:44.505 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:33:44.505 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:44.505 06:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:44.505 [global] 00:33:44.505 thread=1 00:33:44.505 invalidate=1 00:33:44.505 rw=write 00:33:44.505 time_based=1 00:33:44.505 runtime=1 00:33:44.505 ioengine=libaio 00:33:44.505 direct=1 00:33:44.505 bs=4096 00:33:44.505 iodepth=1 00:33:44.505 norandommap=0 00:33:44.505 numjobs=1 00:33:44.505 00:33:44.505 verify_dump=1 00:33:44.505 verify_backlog=512 00:33:44.505 verify_state_save=0 00:33:44.505 do_verify=1 00:33:44.505 verify=crc32c-intel 00:33:44.505 [job0] 00:33:44.505 filename=/dev/nvme0n1 00:33:44.505 Could not set queue depth (nvme0n1) 00:33:44.766 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:44.766 fio-3.35 00:33:44.766 Starting 1 thread 00:33:45.708 00:33:45.708 job0: (groupid=0, jobs=1): err= 0: pid=124719: Wed Nov 27 06:36:05 2024 00:33:45.708 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:33:45.708 slat (nsec): min=6058, max=18478, avg=6577.85, stdev=745.33 00:33:45.708 clat (usec): min=85, max=152, avg=102.50, stdev= 5.97 00:33:45.708 lat (usec): min=92, max=158, avg=109.08, stdev= 5.97 00:33:45.708 clat percentiles (usec): 00:33:45.708 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 98], 00:33:45.708 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 104], 00:33:45.708 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:33:45.708 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 133], 00:33:45.708 | 99.99th=[ 153] 00:33:45.708 write: IOPS=5202, BW=20.3MiB/s (21.3MB/s)(20.3MiB/1001msec); 0 zone resets 00:33:45.708 slat (nsec): min=9351, max=96861, avg=10484.05, stdev=3169.30 00:33:45.708 clat (usec): min=59, max=748, avg=72.66, stdev=11.48 00:33:45.708 lat (usec): min=69, max=783, avg=83.15, stdev=12.50 00:33:45.708 clat percentiles (usec): 00:33:45.708 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 68], 20.00th=[ 69], 00:33:45.708 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 74], 00:33:45.708 | 70.00th=[ 75], 80.00th=[ 77], 90.00th=[ 79], 95.00th=[ 81], 00:33:45.708 | 99.00th=[ 87], 99.50th=[ 92], 99.90th=[ 176], 99.95th=[ 229], 00:33:45.708 | 99.99th=[ 750] 00:33:45.708 bw ( KiB/s): min=20728, max=20728, per=99.60%, avg=20728.00, stdev= 0.00, samples=1 00:33:45.708 iops : min= 5182, max= 5182, avg=5182.00, stdev= 0.00, samples=1 00:33:45.708 lat (usec) : 100=67.60%, 250=32.38%, 500=0.01%, 750=0.01% 00:33:45.708 cpu : usr=1.70%, sys=6.50%, ctx=10328, majf=0, minf=5 00:33:45.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:45.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.708 issued rwts: total=5120,5208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:45.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:45.708 00:33:45.708 Run status group 0 (all jobs): 00:33:45.708 READ: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=20.0MiB (21.0MB), run=1001-1001msec 00:33:45.708 WRITE: bw=20.3MiB/s (21.3MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=20.3MiB (21.3MB), run=1001-1001msec 00:33:45.708 00:33:45.708 Disk stats 
(read/write): 00:33:45.708 nvme0n1: ios=4658/4692, merge=0/0, ticks=493/359, in_queue=852, util=90.88% 00:33:45.708 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:45.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.968 rmmod nvme_tcp 00:33:45.968 rmmod nvme_fabrics 00:33:45.968 rmmod nvme_keyring 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 124615 ']' 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 124615 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 124615 ']' 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 124615 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124615 00:33:45.968 06:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.968 killing process with pid 124615 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124615' 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 124615 00:33:45.968 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 124615 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:46.228 
06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.228 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:33:46.489 00:33:46.489 real 0m5.683s 00:33:46.489 user 0m14.843s 00:33:46.489 sys 0m1.521s 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:46.489 ************************************ 00:33:46.489 END TEST nvmf_nmic 00:33:46.489 ************************************ 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:46.489 ************************************ 00:33:46.489 START TEST nvmf_fio_target 00:33:46.489 ************************************ 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:46.489 * Looking for test storage... 
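Condensed, the nmic test that just finished drives the target through the following RPC sequence (rpc_cmd in the trace wraps scripts/rpc.py against the default /var/tmp/spdk.sock; this is a sketch, with the arguments taken from the log above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # test case1: Malloc0 is already claimed exclusive_write by cnode1, so
    # attaching it to a second subsystem fails with -32602 Invalid parameters,
    # which is the expected result the test checks for above.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && exit 1

test case2 then adds a second listener on port 4421, connects the kernel initiator over both paths, runs the fio write/verify job, and disconnects both controllers before teardown.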
00:33:46.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.489 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:46.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.490 --rc genhtml_branch_coverage=1 00:33:46.490 --rc genhtml_function_coverage=1 00:33:46.490 --rc genhtml_legend=1 00:33:46.490 --rc geninfo_all_blocks=1 00:33:46.490 --rc geninfo_unexecuted_blocks=1 00:33:46.490 00:33:46.490 ' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:46.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.490 --rc genhtml_branch_coverage=1 00:33:46.490 --rc genhtml_function_coverage=1 00:33:46.490 --rc genhtml_legend=1 00:33:46.490 --rc geninfo_all_blocks=1 00:33:46.490 --rc geninfo_unexecuted_blocks=1 00:33:46.490 00:33:46.490 ' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:46.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.490 --rc genhtml_branch_coverage=1 00:33:46.490 --rc genhtml_function_coverage=1 00:33:46.490 --rc genhtml_legend=1 00:33:46.490 --rc geninfo_all_blocks=1 00:33:46.490 --rc geninfo_unexecuted_blocks=1 00:33:46.490 00:33:46.490 ' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:46.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.490 --rc genhtml_branch_coverage=1 00:33:46.490 --rc genhtml_function_coverage=1 00:33:46.490 --rc genhtml_legend=1 00:33:46.490 --rc geninfo_all_blocks=1 00:33:46.490 --rc geninfo_unexecuted_blocks=1 00:33:46.490 
00:33:46.490 ' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:46.490 06:36:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:46.490 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:46.491 06:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:46.491 Cannot find device "nvmf_init_br" 00:33:46.491 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:33:46.491 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:46.491 Cannot find device "nvmf_init_br2" 00:33:46.491 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:33:46.491 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:46.750 Cannot find device "nvmf_tgt_br" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:46.750 Cannot find device "nvmf_tgt_br2" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:46.750 Cannot find device "nvmf_init_br" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:46.750 Cannot find device "nvmf_init_br2" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:46.750 Cannot find device "nvmf_tgt_br" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:46.750 Cannot find device "nvmf_tgt_br2" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:46.750 Cannot find device "nvmf_br" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:46.750 Cannot find device "nvmf_init_if" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:46.750 Cannot find device "nvmf_init_if2" 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:46.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:46.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:46.750 06:36:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:46.750 06:36:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:46.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:46.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:33:46.750 00:33:46.750 --- 10.0.0.3 ping statistics --- 00:33:46.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.750 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:46.750 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:46.750 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:33:46.750 00:33:46.750 --- 10.0.0.4 ping statistics --- 00:33:46.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.750 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:46.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:33:46.750 00:33:46.750 --- 10.0.0.1 ping statistics --- 00:33:46.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.750 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:33:46.750 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:46.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:46.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:33:46.750 00:33:46.750 --- 10.0.0.2 ping statistics --- 00:33:46.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.750 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=124955 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 124955 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 124955 ']' 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:47.010 06:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.010 [2024-11-27 06:36:06.342554] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:33:47.010 [2024-11-27 06:36:06.343317] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:33:47.010 [2024-11-27 06:36:06.343360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.010 [2024-11-27 06:36:06.459809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:47.010 [2024-11-27 06:36:06.482567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:47.010 [2024-11-27 06:36:06.506068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.010 [2024-11-27 06:36:06.506098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.010 [2024-11-27 06:36:06.506103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.010 [2024-11-27 06:36:06.506107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.010 [2024-11-27 06:36:06.506111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.010 [2024-11-27 06:36:06.506886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.010 [2024-11-27 06:36:06.506987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.010 [2024-11-27 06:36:06.507225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.010 [2024-11-27 06:36:06.507225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.269 [2024-11-27 06:36:06.568735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:47.269 [2024-11-27 06:36:06.569095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:47.269 [2024-11-27 06:36:06.569440] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:47.269 [2024-11-27 06:36:06.569846] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:47.269 [2024-11-27 06:36:06.570367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
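With the second target up, fio.sh provisions the bdev layout its fio jobs will exercise: seven 64 MiB malloc bdevs, a raid0 over two of them, a concat raid over three more, all exported through one subsystem on 10.0.0.3. Condensed from the RPC calls that follow in this log (a sketch; the transport options are copied verbatim from the trace, and bdev_malloc_create auto-names the devices Malloc0..Malloc6 when no -b is given):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    for _ in 0 1 2 3 4 5 6; do
        $rpc bdev_malloc_create 64 512            # 64 MiB, 512-byte blocks
    done
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The initiator side then connects with nvme connect and waits until all four namespaces show up (waitforserial SPDKISFASTANDAWESOME 4 below).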
00:33:47.841 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.841 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:47.841 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:47.841 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:47.841 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.842 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.842 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:48.104 [2024-11-27 06:36:07.468324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.104 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.364 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:48.364 06:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.625 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:48.625 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.886 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:48.886 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.147 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:49.147 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:49.147 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.409 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:49.409 06:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.670 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:49.670 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.932 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:49.932 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:50.194 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:50.455 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:50.455 06:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.716 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:50.716 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:50.976 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:50.976 [2024-11-27 06:36:10.468318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:50.976 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:51.236 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:51.496 06:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:33:51.496 06:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:51.496 06:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:51.496 06:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:51.496 06:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:51.496 06:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:51.497 06:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:54.033 06:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:54.033 06:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:54.033 06:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:54.033 06:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:54.033 06:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:54.033 06:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:33:54.033 06:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:54.033 [global] 00:33:54.034 thread=1 00:33:54.034 invalidate=1 00:33:54.034 rw=write 00:33:54.034 time_based=1 00:33:54.034 runtime=1 00:33:54.034 ioengine=libaio 00:33:54.034 direct=1 00:33:54.034 bs=4096 00:33:54.034 iodepth=1 00:33:54.034 norandommap=0 00:33:54.034 numjobs=1 00:33:54.034 00:33:54.034 verify_dump=1 00:33:54.034 verify_backlog=512 00:33:54.034 verify_state_save=0 00:33:54.034 do_verify=1 00:33:54.034 verify=crc32c-intel 00:33:54.034 [job0] 00:33:54.034 filename=/dev/nvme0n1 00:33:54.034 [job1] 00:33:54.034 filename=/dev/nvme0n2 00:33:54.034 [job2] 00:33:54.034 filename=/dev/nvme0n3 00:33:54.034 [job3] 00:33:54.034 filename=/dev/nvme0n4 00:33:54.034 Could not set queue depth (nvme0n1) 00:33:54.034 Could not set queue depth (nvme0n2) 00:33:54.034 Could not set queue depth (nvme0n3) 00:33:54.034 Could not set queue depth (nvme0n4) 00:33:54.034 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.034 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.034 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.034 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.034 fio-3.35 00:33:54.034 Starting 4 threads 00:33:54.969 00:33:54.969 job0: (groupid=0, jobs=1): err= 0: pid=125232: Wed Nov 27 06:36:14 2024 00:33:54.969 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:54.969 slat (nsec): min=8037, max=70151, avg=10777.53, stdev=4249.90 00:33:54.969 clat (usec): min=114, max=7356, avg=253.88, stdev=229.07 00:33:54.969 lat (usec): min=124, max=7365, avg=264.66, stdev=229.38 00:33:54.969 clat percentiles (usec): 00:33:54.969 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 149], 20.00th=[ 172], 00:33:54.969 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 258], 00:33:54.969 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 318], 00:33:54.969 | 99.00th=[ 799], 99.50th=[ 857], 99.90th=[ 1156], 99.95th=[ 6521], 00:33:54.969 | 99.99th=[ 7373] 00:33:54.969 write: IOPS=2080, BW=8324KiB/s (8523kB/s)(8332KiB/1001msec); 0 zone resets 00:33:54.969 slat (nsec): min=12292, max=82947, avg=17895.09, stdev=3906.54 00:33:54.969 clat (usec): min=84, max=7178, avg=199.75, stdev=173.86 00:33:54.969 lat (usec): min=98, max=7192, avg=217.64, stdev=173.89 00:33:54.969 clat percentiles (usec): 00:33:54.969 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 111], 20.00th=[ 141], 00:33:54.969 | 30.00th=[ 155], 40.00th=[ 174], 50.00th=[ 217], 60.00th=[ 225], 00:33:54.969 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:33:54.969 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 1565], 99.95th=[ 2474], 00:33:54.969 | 99.99th=[ 7177] 00:33:54.969 bw ( KiB/s): min= 8192, max= 8192, per=22.59%, avg=8192.00, stdev= 0.00, samples=1 00:33:54.969 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:54.969 lat (usec) : 100=1.02%, 250=67.01%, 500=31.01%, 750=0.10%, 1000=0.73% 
00:33:54.969 lat (msec) : 2=0.05%, 4=0.02%, 10=0.07% 00:33:54.969 cpu : usr=0.60%, sys=4.40%, ctx=4141, majf=0, minf=5 00:33:54.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.969 issued rwts: total=2048,2083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.969 job1: (groupid=0, jobs=1): err= 0: pid=125233: Wed Nov 27 06:36:14 2024 00:33:54.969 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:33:54.969 slat (nsec): min=4305, max=55256, avg=9418.92, stdev=4126.46 00:33:54.969 clat (usec): min=164, max=41108, avg=368.29, stdev=1043.83 00:33:54.969 lat (usec): min=172, max=41115, avg=377.71, stdev=1043.80 00:33:54.969 clat percentiles (usec): 00:33:54.969 | 1.00th=[ 219], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 297], 00:33:54.969 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 330], 00:33:54.969 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 424], 00:33:54.969 | 99.00th=[ 807], 99.50th=[ 857], 99.90th=[ 1401], 99.95th=[41157], 00:33:54.969 | 99.99th=[41157] 00:33:54.969 write: IOPS=1701, BW=6805KiB/s (6969kB/s)(6812KiB/1001msec); 0 zone resets 00:33:54.969 slat (usec): min=7, max=138, avg=15.85, stdev= 7.63 00:33:54.969 clat (usec): min=100, max=2018, avg=228.50, stdev=58.53 00:33:54.969 lat (usec): min=135, max=2031, avg=244.35, stdev=59.33 00:33:54.969 clat percentiles (usec): 00:33:54.969 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 186], 00:33:54.969 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:33:54.969 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:33:54.969 | 99.00th=[ 310], 99.50th=[ 347], 99.90th=[ 783], 99.95th=[ 2024], 00:33:54.969 | 99.99th=[ 2024] 00:33:54.969 bw ( KiB/s): min= 8192, max= 8192, per=22.59%, avg=8192.00, stdev= 0.00, samples=1 00:33:54.969 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:54.969 lat (usec) : 250=41.09%, 500=57.67%, 750=0.28%, 1000=0.83% 00:33:54.969 lat (msec) : 2=0.06%, 4=0.03%, 50=0.03% 00:33:54.969 cpu : usr=0.70%, sys=3.40%, ctx=3240, majf=0, minf=15 00:33:54.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.970 issued rwts: total=1536,1703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.970 job2: (groupid=0, jobs=1): err= 0: pid=125234: Wed Nov 27 06:36:14 2024 00:33:54.970 read: IOPS=3244, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:33:54.970 slat (nsec): min=6688, max=67428, avg=8899.84, stdev=2602.01 00:33:54.970 clat (usec): min=117, max=353, avg=149.90, stdev=16.81 00:33:54.970 lat (usec): min=125, max=361, avg=158.80, stdev=17.25 00:33:54.970 clat percentiles (usec): 00:33:54.970 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 139], 00:33:54.970 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:33:54.970 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 176], 00:33:54.970 | 99.00th=[ 223], 99.50th=[ 239], 99.90th=[ 277], 99.95th=[ 302], 00:33:54.970 | 99.99th=[ 355] 00:33:54.970 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 
zone resets 00:33:54.970 slat (nsec): min=10509, max=99182, avg=14957.46, stdev=5151.26 00:33:54.970 clat (usec): min=82, max=1633, avg=118.51, stdev=34.51 00:33:54.970 lat (usec): min=96, max=1647, avg=133.47, stdev=35.46 00:33:54.970 clat percentiles (usec): 00:33:54.970 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 105], 00:33:54.970 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 115], 00:33:54.970 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 149], 95.00th=[ 161], 00:33:54.970 | 99.00th=[ 190], 99.50th=[ 227], 99.90th=[ 404], 99.95th=[ 562], 00:33:54.970 | 99.99th=[ 1631] 00:33:54.970 bw ( KiB/s): min=13936, max=13936, per=38.42%, avg=13936.00, stdev= 0.00, samples=1 00:33:54.970 iops : min= 3484, max= 3484, avg=3484.00, stdev= 0.00, samples=1 00:33:54.970 lat (usec) : 100=3.02%, 250=96.69%, 500=0.26%, 750=0.01% 00:33:54.970 lat (msec) : 2=0.01% 00:33:54.970 cpu : usr=0.90%, sys=5.90%, ctx=6832, majf=0, minf=11 00:33:54.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.970 issued rwts: total=3248,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.970 job3: (groupid=0, jobs=1): err= 0: pid=125235: Wed Nov 27 06:36:14 2024 00:33:54.970 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:33:54.970 slat (nsec): min=4474, max=61826, avg=8352.53, stdev=4099.18 00:33:54.970 clat (usec): min=157, max=41092, avg=368.95, stdev=1043.48 00:33:54.970 lat (usec): min=165, max=41111, avg=377.31, stdev=1043.76 00:33:54.970 clat percentiles (usec): 00:33:54.970 | 1.00th=[ 223], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:33:54.970 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 330], 00:33:54.970 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 420], 00:33:54.970 | 99.00th=[ 816], 99.50th=[ 857], 99.90th=[ 1434], 99.95th=[41157], 00:33:54.970 | 99.99th=[41157] 00:33:54.970 write: IOPS=1705, BW=6821KiB/s (6985kB/s)(6828KiB/1001msec); 0 zone resets 00:33:54.970 slat (nsec): min=8106, max=69846, avg=15379.43, stdev=6843.98 00:33:54.970 clat (usec): min=108, max=2089, avg=228.93, stdev=60.62 00:33:54.970 lat (usec): min=121, max=2103, avg=244.31, stdev=61.18 00:33:54.970 clat percentiles (usec): 00:33:54.970 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:33:54.970 | 30.00th=[ 215], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:33:54.970 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:33:54.970 | 99.00th=[ 310], 99.50th=[ 363], 99.90th=[ 807], 99.95th=[ 2089], 00:33:54.970 | 99.99th=[ 2089] 00:33:54.970 bw ( KiB/s): min= 8192, max= 8192, per=22.59%, avg=8192.00, stdev= 0.00, samples=1 00:33:54.970 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:54.970 lat (usec) : 250=40.46%, 500=58.28%, 750=0.28%, 1000=0.86% 00:33:54.970 lat (msec) : 2=0.06%, 4=0.03%, 50=0.03% 00:33:54.970 cpu : usr=0.80%, sys=3.10%, ctx=3244, majf=0, minf=15 00:33:54.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.970 issued rwts: total=1536,1707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.970 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:33:54.970 00:33:54.970 Run status group 0 (all jobs): 00:33:54.970 READ: bw=32.7MiB/s (34.2MB/s), 6138KiB/s-12.7MiB/s (6285kB/s-13.3MB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:33:54.970 WRITE: bw=35.4MiB/s (37.1MB/s), 6805KiB/s-14.0MiB/s (6969kB/s-14.7MB/s), io=35.5MiB (37.2MB), run=1001-1001msec 00:33:54.970 00:33:54.970 Disk stats (read/write): 00:33:54.970 nvme0n1: ios=1613/2048, merge=0/0, ticks=417/417, in_queue=834, util=88.26% 00:33:54.970 nvme0n2: ios=1376/1536, merge=0/0, ticks=496/345, in_queue=841, util=89.19% 00:33:54.970 nvme0n3: ios=2899/3072, merge=0/0, ticks=505/389, in_queue=894, util=90.32% 00:33:54.970 nvme0n4: ios=1348/1536, merge=0/0, ticks=480/338, in_queue=818, util=89.88% 00:33:54.970 06:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:54.970 [global] 00:33:54.970 thread=1 00:33:54.970 invalidate=1 00:33:54.970 rw=randwrite 00:33:54.970 time_based=1 00:33:54.970 runtime=1 00:33:54.970 ioengine=libaio 00:33:54.970 direct=1 00:33:54.970 bs=4096 00:33:54.970 iodepth=1 00:33:54.970 norandommap=0 00:33:54.970 numjobs=1 00:33:54.970 00:33:54.970 verify_dump=1 00:33:54.970 verify_backlog=512 00:33:54.970 verify_state_save=0 00:33:54.970 do_verify=1 00:33:54.970 verify=crc32c-intel 00:33:54.970 [job0] 00:33:54.970 filename=/dev/nvme0n1 00:33:54.970 [job1] 00:33:54.970 filename=/dev/nvme0n2 00:33:54.970 [job2] 00:33:54.970 filename=/dev/nvme0n3 00:33:54.970 [job3] 00:33:54.970 filename=/dev/nvme0n4 00:33:54.970 Could not set queue depth (nvme0n1) 00:33:54.970 Could not set queue depth (nvme0n2) 00:33:54.970 Could not set queue depth (nvme0n3) 00:33:54.970 Could not set queue depth (nvme0n4) 00:33:55.228 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.229 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.229 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.229 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.229 fio-3.35 00:33:55.229 Starting 4 threads 00:33:56.164 00:33:56.164 job0: (groupid=0, jobs=1): err= 0: pid=125294: Wed Nov 27 06:36:15 2024 00:33:56.164 read: IOPS=3854, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1001msec) 00:33:56.164 slat (nsec): min=6010, max=67191, avg=7706.45, stdev=4010.82 00:33:56.164 clat (usec): min=99, max=1554, avg=129.24, stdev=28.84 00:33:56.164 lat (usec): min=105, max=1562, avg=136.95, stdev=29.67 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 106], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 118], 00:33:56.164 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 130], 00:33:56.164 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 155], 00:33:56.164 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 343], 99.95th=[ 570], 00:33:56.164 | 99.99th=[ 1549] 00:33:56.164 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:33:56.164 slat (nsec): min=9473, max=92460, avg=12074.30, stdev=6581.72 00:33:56.164 clat (usec): min=74, max=448, avg=101.24, stdev=17.00 00:33:56.164 lat (usec): min=84, max=462, avg=113.31, stdev=19.81 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 90], 00:33:56.164 | 30.00th=[ 93], 40.00th=[ 
95], 50.00th=[ 98], 60.00th=[ 101], 00:33:56.164 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 131], 00:33:56.164 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 219], 99.95th=[ 343], 00:33:56.164 | 99.99th=[ 449] 00:33:56.164 bw ( KiB/s): min=16416, max=16416, per=32.73%, avg=16416.00, stdev= 0.00, samples=1 00:33:56.164 iops : min= 4104, max= 4104, avg=4104.00, stdev= 0.00, samples=1 00:33:56.164 lat (usec) : 100=29.58%, 250=70.29%, 500=0.10%, 750=0.01% 00:33:56.165 lat (msec) : 2=0.01% 00:33:56.165 cpu : usr=1.10%, sys=6.30%, ctx=7954, majf=0, minf=11 00:33:56.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 issued rwts: total=3858,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.165 job1: (groupid=0, jobs=1): err= 0: pid=125295: Wed Nov 27 06:36:15 2024 00:33:56.165 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:56.165 slat (nsec): min=5239, max=26032, avg=7416.96, stdev=1473.59 00:33:56.165 clat (usec): min=141, max=524, avg=243.52, stdev=38.50 00:33:56.165 lat (usec): min=147, max=532, avg=250.94, stdev=38.85 00:33:56.165 clat percentiles (usec): 00:33:56.165 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 217], 20.00th=[ 227], 00:33:56.165 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:33:56.165 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 314], 00:33:56.165 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 412], 99.95th=[ 412], 00:33:56.165 | 99.99th=[ 529] 00:33:56.165 write: IOPS=2432, BW=9730KiB/s (9964kB/s)(9740KiB/1001msec); 0 zone resets 00:33:56.165 slat (nsec): min=7887, max=83889, avg=13010.39, stdev=5158.70 00:33:56.165 clat (usec): min=78, max=629, avg=185.14, stdev=26.47 00:33:56.165 lat (usec): min=114, max=641, avg=198.15, stdev=25.68 00:33:56.165 clat percentiles (usec): 00:33:56.165 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:33:56.165 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:33:56.165 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 217], 00:33:56.165 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 478], 99.95th=[ 537], 00:33:56.165 | 99.99th=[ 627] 00:33:56.165 bw ( KiB/s): min= 9880, max= 9880, per=19.70%, avg=9880.00, stdev= 0.00, samples=1 00:33:56.165 iops : min= 2470, max= 2470, avg=2470.00, stdev= 0.00, samples=1 00:33:56.165 lat (usec) : 100=0.02%, 250=84.12%, 500=15.79%, 750=0.07% 00:33:56.165 cpu : usr=0.40%, sys=3.80%, ctx=4483, majf=0, minf=11 00:33:56.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 issued rwts: total=2048,2435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.165 job2: (groupid=0, jobs=1): err= 0: pid=125296: Wed Nov 27 06:36:15 2024 00:33:56.165 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:56.165 slat (nsec): min=4669, max=20171, avg=6757.92, stdev=1188.17 00:33:56.165 clat (usec): min=147, max=517, avg=244.10, stdev=36.14 00:33:56.165 lat (usec): min=155, max=525, avg=250.86, stdev=36.39 00:33:56.165 clat percentiles (usec): 00:33:56.165 | 1.00th=[ 
159], 5.00th=[ 172], 10.00th=[ 219], 20.00th=[ 227], 00:33:56.165 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:33:56.165 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 310], 00:33:56.165 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 449], 99.95th=[ 506], 00:33:56.165 | 99.99th=[ 519] 00:33:56.165 write: IOPS=2432, BW=9730KiB/s (9964kB/s)(9740KiB/1001msec); 0 zone resets 00:33:56.165 slat (nsec): min=8911, max=91727, avg=14170.29, stdev=5104.18 00:33:56.165 clat (usec): min=114, max=661, avg=183.99, stdev=25.70 00:33:56.165 lat (usec): min=147, max=674, avg=198.16, stdev=25.27 00:33:56.165 clat percentiles (usec): 00:33:56.165 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:33:56.165 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:33:56.165 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 219], 00:33:56.165 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 461], 99.95th=[ 482], 00:33:56.165 | 99.99th=[ 660] 00:33:56.165 bw ( KiB/s): min= 9891, max= 9891, per=19.72%, avg=9891.00, stdev= 0.00, samples=1 00:33:56.165 iops : min= 2472, max= 2472, avg=2472.00, stdev= 0.00, samples=1 00:33:56.165 lat (usec) : 250=83.40%, 500=16.53%, 750=0.07% 00:33:56.165 cpu : usr=0.90%, sys=3.50%, ctx=4485, majf=0, minf=19 00:33:56.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 issued rwts: total=2048,2435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.165 job3: (groupid=0, jobs=1): err= 0: pid=125297: Wed Nov 27 06:36:15 2024 00:33:56.165 read: IOPS=3116, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:33:56.165 slat (nsec): min=6878, max=27037, avg=8553.48, stdev=1512.17 00:33:56.165 clat (usec): min=124, max=313, avg=158.19, stdev=14.54 00:33:56.165 lat (usec): min=138, max=322, avg=166.74, stdev=14.59 00:33:56.165 clat percentiles (usec): 00:33:56.165 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:33:56.165 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:33:56.165 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:33:56.165 | 99.00th=[ 198], 99.50th=[ 212], 99.90th=[ 302], 99.95th=[ 306], 00:33:56.165 | 99.99th=[ 314] 00:33:56.165 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:33:56.165 slat (nsec): min=10184, max=93179, avg=14127.67, stdev=5831.07 00:33:56.165 clat (usec): min=86, max=350, avg=118.30, stdev=15.43 00:33:56.165 lat (usec): min=97, max=393, avg=132.42, stdev=17.69 00:33:56.165 clat percentiles (usec): 00:33:56.165 | 1.00th=[ 97], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 108], 00:33:56.165 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 120], 00:33:56.165 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 135], 95.00th=[ 143], 00:33:56.165 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 277], 99.95th=[ 306], 00:33:56.165 | 99.99th=[ 351] 00:33:56.165 bw ( KiB/s): min=14060, max=14060, per=28.04%, avg=14060.00, stdev= 0.00, samples=1 00:33:56.165 iops : min= 3515, max= 3515, avg=3515.00, stdev= 0.00, samples=1 00:33:56.165 lat (usec) : 100=1.85%, 250=97.93%, 500=0.22% 00:33:56.165 cpu : usr=0.80%, sys=5.40%, ctx=6705, majf=0, minf=7 00:33:56.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.165 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.165 issued rwts: total=3120,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.165 00:33:56.165 Run status group 0 (all jobs): 00:33:56.165 READ: bw=43.2MiB/s (45.3MB/s), 8184KiB/s-15.1MiB/s (8380kB/s-15.8MB/s), io=43.3MiB (45.4MB), run=1001-1001msec 00:33:56.165 WRITE: bw=49.0MiB/s (51.4MB/s), 9730KiB/s-16.0MiB/s (9964kB/s-16.8MB/s), io=49.0MiB (51.4MB), run=1001-1001msec 00:33:56.165 00:33:56.165 Disk stats (read/write): 00:33:56.165 nvme0n1: ios=3456/3584, merge=0/0, ticks=455/382, in_queue=837, util=89.38% 00:33:56.165 nvme0n2: ios=1934/2048, merge=0/0, ticks=470/367, in_queue=837, util=89.64% 00:33:56.165 nvme0n3: ios=1912/2048, merge=0/0, ticks=526/391, in_queue=917, util=90.55% 00:33:56.165 nvme0n4: ios=2795/3072, merge=0/0, ticks=449/394, in_queue=843, util=89.91% 00:33:56.165 06:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:56.424 [global] 00:33:56.424 thread=1 00:33:56.424 invalidate=1 00:33:56.424 rw=write 00:33:56.424 time_based=1 00:33:56.424 runtime=1 00:33:56.424 ioengine=libaio 00:33:56.424 direct=1 00:33:56.424 bs=4096 00:33:56.424 iodepth=128 00:33:56.424 norandommap=0 00:33:56.424 numjobs=1 00:33:56.424 00:33:56.424 verify_dump=1 00:33:56.424 verify_backlog=512 00:33:56.424 verify_state_save=0 00:33:56.424 do_verify=1 00:33:56.424 verify=crc32c-intel 00:33:56.424 [job0] 00:33:56.424 filename=/dev/nvme0n1 00:33:56.424 [job1] 00:33:56.424 filename=/dev/nvme0n2 00:33:56.424 [job2] 00:33:56.424 filename=/dev/nvme0n3 00:33:56.424 [job3] 00:33:56.424 filename=/dev/nvme0n4 00:33:56.424 Could not set queue depth (nvme0n1) 00:33:56.424 Could not set queue depth (nvme0n2) 00:33:56.424 Could not set queue depth (nvme0n3) 00:33:56.424 Could not set queue depth (nvme0n4) 00:33:56.424 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.424 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.424 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.424 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.424 fio-3.35 00:33:56.424 Starting 4 threads 00:33:57.799 00:33:57.799 job0: (groupid=0, jobs=1): err= 0: pid=125354: Wed Nov 27 06:36:17 2024 00:33:57.799 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:33:57.799 slat (usec): min=4, max=3119, avg=79.23, stdev=289.70 00:33:57.799 clat (usec): min=7897, max=12178, avg=10282.80, stdev=611.29 00:33:57.799 lat (usec): min=8033, max=12189, avg=10362.03, stdev=567.81 00:33:57.799 clat percentiles (usec): 00:33:57.799 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9896], 00:33:57.799 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10421], 00:33:57.799 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:33:57.799 | 99.00th=[11731], 99.50th=[11863], 99.90th=[11994], 99.95th=[12125], 00:33:57.799 | 99.99th=[12125] 00:33:57.799 write: IOPS=6387, BW=24.9MiB/s (26.2MB/s)(25.0MiB/1002msec); 0 zone resets 00:33:57.799 slat (usec): min=6, max=2474, avg=76.03, stdev=324.52 00:33:57.799 clat 
(usec): min=1041, max=12062, avg=9913.52, stdev=930.67 00:33:57.799 lat (usec): min=1055, max=12193, avg=9989.55, stdev=880.34 00:33:57.799 clat percentiles (usec): 00:33:57.799 | 1.00th=[ 5735], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[ 9634], 00:33:57.799 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10028], 00:33:57.799 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:33:57.799 | 99.00th=[11731], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:33:57.799 | 99.99th=[12125] 00:33:57.799 bw ( KiB/s): min=25026, max=25208, per=33.16%, avg=25117.00, stdev=128.69, samples=2 00:33:57.799 iops : min= 6256, max= 6302, avg=6279.00, stdev=32.53, samples=2 00:33:57.799 lat (msec) : 2=0.12%, 4=0.26%, 10=38.09%, 20=61.54% 00:33:57.799 cpu : usr=3.00%, sys=9.19%, ctx=684, majf=0, minf=11 00:33:57.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:57.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.799 issued rwts: total=6144,6400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.799 job1: (groupid=0, jobs=1): err= 0: pid=125355: Wed Nov 27 06:36:17 2024 00:33:57.799 read: IOPS=3509, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1003msec) 00:33:57.799 slat (usec): min=2, max=5362, avg=143.96, stdev=536.67 00:33:57.799 clat (usec): min=1824, max=23484, avg=17745.22, stdev=2251.08 00:33:57.799 lat (usec): min=3534, max=23494, avg=17889.18, stdev=2256.66 00:33:57.799 clat percentiles (usec): 00:33:57.799 | 1.00th=[ 8848], 5.00th=[14877], 10.00th=[15401], 20.00th=[16319], 00:33:57.799 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17957], 60.00th=[18220], 00:33:57.799 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20055], 95.00th=[20841], 00:33:57.799 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23462], 99.95th=[23462], 00:33:57.799 | 99.99th=[23462] 00:33:57.799 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:33:57.799 slat (usec): min=6, max=4473, avg=133.85, stdev=498.71 00:33:57.799 clat (usec): min=11722, max=23379, avg=17802.98, stdev=1441.85 00:33:57.799 lat (usec): min=12502, max=23391, avg=17936.83, stdev=1365.99 00:33:57.799 clat percentiles (usec): 00:33:57.799 | 1.00th=[13304], 5.00th=[15664], 10.00th=[16450], 20.00th=[16909], 00:33:57.799 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:33:57.799 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19792], 95.00th=[20579], 00:33:57.799 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23200], 99.95th=[23462], 00:33:57.799 | 99.99th=[23462] 00:33:57.799 bw ( KiB/s): min=13000, max=15703, per=18.94%, avg=14351.50, stdev=1911.31, samples=2 00:33:57.799 iops : min= 3250, max= 3925, avg=3587.50, stdev=477.30, samples=2 00:33:57.799 lat (msec) : 2=0.01%, 4=0.14%, 10=0.55%, 20=89.89%, 50=9.40% 00:33:57.799 cpu : usr=1.50%, sys=5.89%, ctx=1113, majf=0, minf=9 00:33:57.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:57.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.800 issued rwts: total=3520,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.800 job2: (groupid=0, jobs=1): err= 0: pid=125356: Wed Nov 27 06:36:17 2024 00:33:57.800 read: 
IOPS=3529, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1002msec) 00:33:57.800 slat (usec): min=6, max=5391, avg=143.71, stdev=538.13 00:33:57.800 clat (usec): min=479, max=22538, avg=17783.88, stdev=2293.17 00:33:57.800 lat (usec): min=1847, max=23095, avg=17927.58, stdev=2273.89 00:33:57.800 clat percentiles (usec): 00:33:57.800 | 1.00th=[ 5866], 5.00th=[15401], 10.00th=[16057], 20.00th=[16712], 00:33:57.800 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[18482], 00:33:57.800 | 70.00th=[18744], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:33:57.800 | 99.00th=[20841], 99.50th=[21365], 99.90th=[22152], 99.95th=[22414], 00:33:57.800 | 99.99th=[22414] 00:33:57.800 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:33:57.800 slat (usec): min=6, max=4798, avg=133.56, stdev=498.21 00:33:57.800 clat (usec): min=12315, max=21683, avg=17714.53, stdev=1239.22 00:33:57.800 lat (usec): min=12554, max=21816, avg=17848.09, stdev=1164.77 00:33:57.800 clat percentiles (usec): 00:33:57.800 | 1.00th=[13435], 5.00th=[15926], 10.00th=[16450], 20.00th=[16909], 00:33:57.800 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:33:57.800 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19006], 95.00th=[20055], 00:33:57.800 | 99.00th=[21103], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:33:57.800 | 99.99th=[21627] 00:33:57.800 bw ( KiB/s): min=13184, max=15457, per=18.90%, avg=14320.50, stdev=1607.25, samples=2 00:33:57.800 iops : min= 3296, max= 3864, avg=3580.00, stdev=401.64, samples=2 00:33:57.800 lat (usec) : 500=0.01% 00:33:57.800 lat (msec) : 2=0.20%, 4=0.21%, 10=0.65%, 20=93.41%, 50=5.52% 00:33:57.800 cpu : usr=1.50%, sys=6.09%, ctx=1109, majf=0, minf=9 00:33:57.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:33:57.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.800 issued rwts: total=3537,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.800 job3: (groupid=0, jobs=1): err= 0: pid=125357: Wed Nov 27 06:36:17 2024 00:33:57.800 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:33:57.800 slat (usec): min=4, max=4239, avg=94.50, stdev=366.93 00:33:57.800 clat (usec): min=9232, max=17300, avg=12372.91, stdev=955.32 00:33:57.800 lat (usec): min=9863, max=17452, avg=12467.41, stdev=927.60 00:33:57.800 clat percentiles (usec): 00:33:57.800 | 1.00th=[10290], 5.00th=[10945], 10.00th=[11207], 20.00th=[11731], 00:33:57.800 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:33:57.800 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13435], 95.00th=[13960], 00:33:57.800 | 99.00th=[15795], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:33:57.800 | 99.99th=[17171] 00:33:57.800 write: IOPS=5410, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1003msec); 0 zone resets 00:33:57.800 slat (usec): min=8, max=2805, avg=91.27, stdev=408.21 00:33:57.800 clat (usec): min=248, max=14349, avg=11680.63, stdev=1085.72 00:33:57.800 lat (usec): min=2577, max=14609, avg=11771.90, stdev=1018.35 00:33:57.800 clat percentiles (usec): 00:33:57.800 | 1.00th=[ 5800], 5.00th=[10290], 10.00th=[11207], 20.00th=[11469], 00:33:57.800 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:33:57.800 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12387], 95.00th=[12911], 00:33:57.800 | 99.00th=[13698], 99.50th=[13829], 
99.90th=[14091], 99.95th=[14091], 00:33:57.800 | 99.99th=[14353] 00:33:57.800 bw ( KiB/s): min=20912, max=21480, per=27.98%, avg=21196.00, stdev=401.64, samples=2 00:33:57.800 iops : min= 5228, max= 5370, avg=5299.00, stdev=100.41, samples=2 00:33:57.800 lat (usec) : 250=0.01% 00:33:57.800 lat (msec) : 4=0.30%, 10=2.28%, 20=97.41% 00:33:57.800 cpu : usr=3.19%, sys=7.78%, ctx=566, majf=0, minf=10 00:33:57.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:57.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.800 issued rwts: total=5120,5427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.800 00:33:57.800 Run status group 0 (all jobs): 00:33:57.800 READ: bw=71.4MiB/s (74.8MB/s), 13.7MiB/s-24.0MiB/s (14.4MB/s-25.1MB/s), io=71.6MiB (75.0MB), run=1002-1003msec 00:33:57.800 WRITE: bw=74.0MiB/s (77.6MB/s), 14.0MiB/s-24.9MiB/s (14.6MB/s-26.2MB/s), io=74.2MiB (77.8MB), run=1002-1003msec 00:33:57.800 00:33:57.800 Disk stats (read/write): 00:33:57.800 nvme0n1: ios=5472/5632, merge=0/0, ticks=13698/12552, in_queue=26250, util=89.57% 00:33:57.800 nvme0n2: ios=3122/3216, merge=0/0, ticks=13766/12926, in_queue=26692, util=89.83% 00:33:57.800 nvme0n3: ios=3104/3190, merge=0/0, ticks=13802/13035, in_queue=26837, util=90.05% 00:33:57.800 nvme0n4: ios=4650/4655, merge=0/0, ticks=13966/12410, in_queue=26376, util=91.04% 00:33:57.800 06:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:57.800 [global] 00:33:57.800 thread=1 00:33:57.800 invalidate=1 00:33:57.800 rw=randwrite 00:33:57.800 time_based=1 00:33:57.800 runtime=1 00:33:57.800 ioengine=libaio 00:33:57.800 direct=1 00:33:57.800 bs=4096 00:33:57.800 iodepth=128 00:33:57.800 norandommap=0 00:33:57.800 numjobs=1 00:33:57.800 00:33:57.800 verify_dump=1 00:33:57.800 verify_backlog=512 00:33:57.800 verify_state_save=0 00:33:57.800 do_verify=1 00:33:57.800 verify=crc32c-intel 00:33:57.800 [job0] 00:33:57.800 filename=/dev/nvme0n1 00:33:57.800 [job1] 00:33:57.800 filename=/dev/nvme0n2 00:33:57.800 [job2] 00:33:57.800 filename=/dev/nvme0n3 00:33:57.800 [job3] 00:33:57.800 filename=/dev/nvme0n4 00:33:57.800 Could not set queue depth (nvme0n1) 00:33:57.800 Could not set queue depth (nvme0n2) 00:33:57.800 Could not set queue depth (nvme0n3) 00:33:57.800 Could not set queue depth (nvme0n4) 00:33:57.800 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.800 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.800 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.800 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.800 fio-3.35 00:33:57.800 Starting 4 threads 00:33:59.177 00:33:59.177 job0: (groupid=0, jobs=1): err= 0: pid=125412: Wed Nov 27 06:36:18 2024 00:33:59.177 read: IOPS=3251, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1003msec) 00:33:59.177 slat (usec): min=4, max=8349, avg=156.45, stdev=818.39 00:33:59.177 clat (usec): min=1530, max=32562, avg=19819.94, stdev=5768.13 00:33:59.177 lat (usec): min=3004, max=32572, avg=19976.40, stdev=5766.33 
00:33:59.177 clat percentiles (usec): 00:33:59.177 | 1.00th=[ 4883], 5.00th=[12649], 10.00th=[13829], 20.00th=[14222], 00:33:59.177 | 30.00th=[14484], 40.00th=[16712], 50.00th=[19792], 60.00th=[22152], 00:33:59.177 | 70.00th=[24249], 80.00th=[25297], 90.00th=[28181], 95.00th=[28705], 00:33:59.177 | 99.00th=[30802], 99.50th=[30802], 99.90th=[32637], 99.95th=[32637], 00:33:59.177 | 99.99th=[32637] 00:33:59.177 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:33:59.177 slat (usec): min=8, max=7584, avg=132.45, stdev=688.53 00:33:59.177 clat (usec): min=8342, max=27394, avg=17141.39, stdev=4909.39 00:33:59.177 lat (usec): min=10576, max=27768, avg=17273.84, stdev=4899.38 00:33:59.177 clat percentiles (usec): 00:33:59.177 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:33:59.177 | 30.00th=[12387], 40.00th=[16712], 50.00th=[17957], 60.00th=[19006], 00:33:59.177 | 70.00th=[21365], 80.00th=[21890], 90.00th=[23200], 95.00th=[23987], 00:33:59.177 | 99.00th=[26870], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:33:59.177 | 99.99th=[27395] 00:33:59.177 bw ( KiB/s): min=13304, max=15368, per=23.40%, avg=14336.00, stdev=1459.47, samples=2 00:33:59.177 iops : min= 3326, max= 3842, avg=3584.00, stdev=364.87, samples=2 00:33:59.177 lat (msec) : 2=0.01%, 4=0.03%, 10=1.48%, 20=55.87%, 50=42.62% 00:33:59.177 cpu : usr=1.90%, sys=5.69%, ctx=242, majf=0, minf=15 00:33:59.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:59.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.177 issued rwts: total=3261,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.177 job1: (groupid=0, jobs=1): err= 0: pid=125413: Wed Nov 27 06:36:18 2024 00:33:59.177 read: IOPS=3794, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1003msec) 00:33:59.177 slat (usec): min=2, max=4500, avg=129.18, stdev=490.64 00:33:59.177 clat (usec): min=1790, max=20853, avg=16117.64, stdev=1828.35 00:33:59.177 lat (usec): min=3985, max=20860, avg=16246.82, stdev=1819.82 00:33:59.177 clat percentiles (usec): 00:33:59.177 | 1.00th=[ 8160], 5.00th=[13698], 10.00th=[14484], 20.00th=[15139], 00:33:59.177 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:33:59.177 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17695], 95.00th=[18220], 00:33:59.177 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20579], 99.95th=[20579], 00:33:59.177 | 99.99th=[20841] 00:33:59.177 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:33:59.177 slat (usec): min=5, max=4155, avg=120.50, stdev=475.24 00:33:59.178 clat (usec): min=11024, max=20461, avg=15946.89, stdev=1220.38 00:33:59.178 lat (usec): min=11115, max=20472, avg=16067.39, stdev=1142.99 00:33:59.178 clat percentiles (usec): 00:33:59.178 | 1.00th=[12256], 5.00th=[13960], 10.00th=[14746], 20.00th=[15139], 00:33:59.178 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15926], 60.00th=[16319], 00:33:59.178 | 70.00th=[16450], 80.00th=[16581], 90.00th=[17433], 95.00th=[18482], 00:33:59.178 | 99.00th=[19006], 99.50th=[19268], 99.90th=[20317], 99.95th=[20317], 00:33:59.178 | 99.99th=[20579] 00:33:59.178 bw ( KiB/s): min=16384, max=16384, per=26.75%, avg=16384.00, stdev= 0.00, samples=2 00:33:59.178 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:33:59.178 lat (msec) : 2=0.01%, 4=0.03%, 10=0.72%, 20=99.03%, 
50=0.22% 00:33:59.178 cpu : usr=2.00%, sys=5.89%, ctx=1138, majf=0, minf=22 00:33:59.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:59.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.178 issued rwts: total=3806,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.178 job2: (groupid=0, jobs=1): err= 0: pid=125414: Wed Nov 27 06:36:18 2024 00:33:59.178 read: IOPS=3194, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1003msec) 00:33:59.178 slat (usec): min=3, max=7238, avg=136.72, stdev=666.79 00:33:59.178 clat (usec): min=1468, max=30578, avg=17063.86, stdev=3134.90 00:33:59.178 lat (usec): min=4301, max=33064, avg=17200.58, stdev=3179.82 00:33:59.178 clat percentiles (usec): 00:33:59.178 | 1.00th=[ 4621], 5.00th=[12911], 10.00th=[13960], 20.00th=[15008], 00:33:59.178 | 30.00th=[15664], 40.00th=[16319], 50.00th=[16581], 60.00th=[17695], 00:33:59.178 | 70.00th=[18482], 80.00th=[19268], 90.00th=[20579], 95.00th=[21627], 00:33:59.178 | 99.00th=[26346], 99.50th=[27919], 99.90th=[30540], 99.95th=[30540], 00:33:59.178 | 99.99th=[30540] 00:33:59.178 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:33:59.178 slat (usec): min=6, max=5682, avg=152.55, stdev=670.02 00:33:59.178 clat (usec): min=11010, max=44509, avg=20107.53, stdev=8054.38 00:33:59.178 lat (usec): min=11025, max=44531, avg=20260.07, stdev=8120.78 00:33:59.178 clat percentiles (usec): 00:33:59.178 | 1.00th=[11338], 5.00th=[12911], 10.00th=[12911], 20.00th=[14353], 00:33:59.178 | 30.00th=[14746], 40.00th=[16188], 50.00th=[16450], 60.00th=[16581], 00:33:59.178 | 70.00th=[21103], 80.00th=[28443], 90.00th=[33817], 95.00th=[37487], 00:33:59.178 | 99.00th=[41681], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:33:59.178 | 99.99th=[44303] 00:33:59.178 bw ( KiB/s): min=12344, max=16328, per=23.40%, avg=14336.00, stdev=2817.11, samples=2 00:33:59.178 iops : min= 3086, max= 4082, avg=3584.00, stdev=704.28, samples=2 00:33:59.178 lat (msec) : 2=0.01%, 10=1.09%, 20=75.31%, 50=23.59% 00:33:59.178 cpu : usr=1.30%, sys=6.49%, ctx=337, majf=0, minf=11 00:33:59.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:59.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.178 issued rwts: total=3204,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.178 job3: (groupid=0, jobs=1): err= 0: pid=125416: Wed Nov 27 06:36:18 2024 00:33:59.178 read: IOPS=3799, BW=14.8MiB/s (15.6MB/s)(14.9MiB/1002msec) 00:33:59.178 slat (usec): min=5, max=4175, avg=129.83, stdev=504.69 00:33:59.178 clat (usec): min=400, max=21271, avg=15976.44, stdev=2163.58 00:33:59.178 lat (usec): min=1833, max=21281, avg=16106.27, stdev=2157.83 00:33:59.178 clat percentiles (usec): 00:33:59.178 | 1.00th=[ 5932], 5.00th=[13435], 10.00th=[13960], 20.00th=[14746], 00:33:59.178 | 30.00th=[15401], 40.00th=[15926], 50.00th=[16319], 60.00th=[16581], 00:33:59.178 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18744], 00:33:59.178 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:33:59.178 | 99.99th=[21365] 00:33:59.178 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:33:59.178 
slat (usec): min=6, max=3991, avg=119.82, stdev=461.00 00:33:59.178 clat (usec): min=11112, max=20771, avg=15992.21, stdev=1212.01 00:33:59.178 lat (usec): min=11130, max=20782, avg=16112.03, stdev=1142.45 00:33:59.178 clat percentiles (usec): 00:33:59.178 | 1.00th=[12256], 5.00th=[14484], 10.00th=[15008], 20.00th=[15270], 00:33:59.178 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:33:59.178 | 70.00th=[16319], 80.00th=[16581], 90.00th=[17171], 95.00th=[18482], 00:33:59.178 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:33:59.178 | 99.99th=[20841] 00:33:59.178 bw ( KiB/s): min=16384, max=16384, per=26.75%, avg=16384.00, stdev= 0.00, samples=1 00:33:59.178 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:33:59.178 lat (usec) : 500=0.01% 00:33:59.178 lat (msec) : 2=0.20%, 4=0.04%, 10=0.57%, 20=98.60%, 50=0.58% 00:33:59.178 cpu : usr=1.50%, sys=6.49%, ctx=1145, majf=0, minf=5 00:33:59.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:59.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.178 issued rwts: total=3807,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.178 00:33:59.178 Run status group 0 (all jobs): 00:33:59.178 READ: bw=54.8MiB/s (57.5MB/s), 12.5MiB/s-14.8MiB/s (13.1MB/s-15.6MB/s), io=55.0MiB (57.7MB), run=1002-1003msec 00:33:59.178 WRITE: bw=59.8MiB/s (62.7MB/s), 14.0MiB/s-16.0MiB/s (14.6MB/s-16.7MB/s), io=60.0MiB (62.9MB), run=1002-1003msec 00:33:59.178 00:33:59.178 Disk stats (read/write): 00:33:59.178 nvme0n1: ios=3026/3072, merge=0/0, ticks=14728/11815, in_queue=26543, util=89.58% 00:33:59.178 nvme0n2: ios=3421/3584, merge=0/0, ticks=13688/12683, in_queue=26371, util=89.84% 00:33:59.178 nvme0n3: ios=2693/3072, merge=0/0, ticks=15048/20209, in_queue=35257, util=89.93% 00:33:59.178 nvme0n4: ios=3325/3584, merge=0/0, ticks=13458/12964, in_queue=26422, util=89.69% 00:33:59.178 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:59.178 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=125428 00:33:59.178 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:59.178 06:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:59.178 [global] 00:33:59.178 thread=1 00:33:59.178 invalidate=1 00:33:59.178 rw=read 00:33:59.178 time_based=1 00:33:59.178 runtime=10 00:33:59.178 ioengine=libaio 00:33:59.178 direct=1 00:33:59.178 bs=4096 00:33:59.178 iodepth=1 00:33:59.178 norandommap=1 00:33:59.178 numjobs=1 00:33:59.178 00:33:59.178 [job0] 00:33:59.178 filename=/dev/nvme0n1 00:33:59.178 [job1] 00:33:59.178 filename=/dev/nvme0n2 00:33:59.178 [job2] 00:33:59.178 filename=/dev/nvme0n3 00:33:59.178 [job3] 00:33:59.178 filename=/dev/nvme0n4 00:33:59.178 Could not set queue depth (nvme0n1) 00:33:59.178 Could not set queue depth (nvme0n2) 00:33:59.178 Could not set queue depth (nvme0n3) 00:33:59.178 Could not set queue depth (nvme0n4) 00:33:59.178 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.178 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:33:59.178 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.178 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.178 fio-3.35 00:33:59.178 Starting 4 threads 00:34:02.463 06:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:02.463 fio: pid=125478, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:02.463 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=75976704, buflen=4096 00:34:02.463 06:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:02.463 fio: pid=125477, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:02.463 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=58155008, buflen=4096 00:34:02.463 06:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.463 06:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:02.463 fio: pid=125475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:02.463 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=22147072, buflen=4096 00:34:02.463 06:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.463 06:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:02.722 fio: pid=125476, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:02.722 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1495040, buflen=4096 00:34:02.722 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.722 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:02.722 00:34:02.722 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=125475: Wed Nov 27 06:36:22 2024 00:34:02.722 read: IOPS=6553, BW=25.6MiB/s (26.8MB/s)(85.1MiB/3325msec) 00:34:02.722 slat (usec): min=3, max=17328, avg= 9.91, stdev=190.47 00:34:02.722 clat (usec): min=94, max=4119, avg=142.13, stdev=49.42 00:34:02.722 lat (usec): min=100, max=17474, avg=152.04, stdev=197.53 00:34:02.722 clat percentiles (usec): 00:34:02.722 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 123], 00:34:02.722 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:34:02.722 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 174], 95.00th=[ 188], 00:34:02.722 | 99.00th=[ 247], 99.50th=[ 277], 99.90th=[ 363], 99.95th=[ 482], 00:34:02.722 | 99.99th=[ 2474] 00:34:02.722 bw ( KiB/s): min=22584, max=28768, per=33.25%, avg=26765.33, stdev=2480.65, samples=6 00:34:02.722 iops : min= 5646, max= 7192, avg=6691.33, stdev=620.16, 
samples=6 00:34:02.722 lat (usec) : 100=0.08%, 250=99.01%, 500=0.87%, 750=0.01%, 1000=0.01% 00:34:02.722 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:34:02.722 cpu : usr=0.45%, sys=4.18%, ctx=21801, majf=0, minf=1 00:34:02.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.722 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.722 issued rwts: total=21792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.722 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=125476: Wed Nov 27 06:36:22 2024 00:34:02.722 read: IOPS=4728, BW=18.5MiB/s (19.4MB/s)(65.4MiB/3542msec) 00:34:02.722 slat (usec): min=3, max=23259, avg=14.62, stdev=298.96 00:34:02.722 clat (usec): min=90, max=5001, avg=196.23, stdev=73.73 00:34:02.723 lat (usec): min=98, max=26808, avg=210.86, stdev=327.65 00:34:02.723 clat percentiles (usec): 00:34:02.723 | 1.00th=[ 103], 5.00th=[ 112], 10.00th=[ 129], 20.00th=[ 165], 00:34:02.723 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:34:02.723 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 258], 00:34:02.723 | 99.00th=[ 306], 99.50th=[ 371], 99.90th=[ 750], 99.95th=[ 1172], 00:34:02.723 | 99.99th=[ 3752] 00:34:02.723 bw ( KiB/s): min=17440, max=19008, per=22.79%, avg=18346.67, stdev=509.28, samples=6 00:34:02.723 iops : min= 4360, max= 4752, avg=4586.67, stdev=127.32, samples=6 00:34:02.723 lat (usec) : 100=0.45%, 250=93.03%, 500=6.18%, 750=0.24%, 1000=0.02% 00:34:02.723 lat (msec) : 2=0.05%, 4=0.02%, 10=0.01% 00:34:02.723 cpu : usr=0.34%, sys=3.25%, ctx=16772, majf=0, minf=1 00:34:02.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.723 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.723 issued rwts: total=16750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.723 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=125477: Wed Nov 27 06:36:22 2024 00:34:02.723 read: IOPS=4543, BW=17.7MiB/s (18.6MB/s)(55.5MiB/3125msec) 00:34:02.723 slat (usec): min=4, max=13809, avg=11.81, stdev=163.64 00:34:02.723 clat (usec): min=112, max=7934, avg=207.69, stdev=89.35 00:34:02.723 lat (usec): min=121, max=14051, avg=219.50, stdev=186.94 00:34:02.723 clat percentiles (usec): 00:34:02.723 | 1.00th=[ 133], 5.00th=[ 149], 10.00th=[ 178], 20.00th=[ 190], 00:34:02.723 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:34:02.723 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 262], 00:34:02.723 | 99.00th=[ 310], 99.50th=[ 379], 99.90th=[ 717], 99.95th=[ 1090], 00:34:02.723 | 99.99th=[ 3752] 00:34:02.723 bw ( KiB/s): min=16688, max=18400, per=22.29%, avg=17942.67, stdev=676.72, samples=6 00:34:02.723 iops : min= 4172, max= 4600, avg=4485.67, stdev=169.18, samples=6 00:34:02.723 lat (usec) : 250=92.93%, 500=6.73%, 750=0.25%, 1000=0.02% 00:34:02.723 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:34:02.723 cpu : usr=0.38%, sys=3.30%, ctx=14202, majf=0, minf=2 00:34:02.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:02.723 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.723 issued rwts: total=14199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.723 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=125478: Wed Nov 27 06:36:22 2024 00:34:02.723 read: IOPS=6352, BW=24.8MiB/s (26.0MB/s)(72.5MiB/2920msec) 00:34:02.723 slat (nsec): min=4650, max=79085, avg=6210.29, stdev=1913.21 00:34:02.723 clat (usec): min=106, max=33240, avg=150.57, stdev=244.39 00:34:02.723 lat (usec): min=115, max=33247, avg=156.78, stdev=244.44 00:34:02.723 clat percentiles (usec): 00:34:02.723 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:34:02.723 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 149], 00:34:02.723 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 186], 00:34:02.723 | 99.00th=[ 212], 99.50th=[ 227], 99.90th=[ 375], 99.95th=[ 404], 00:34:02.723 | 99.99th=[ 1696] 00:34:02.723 bw ( KiB/s): min=24888, max=27000, per=32.40%, avg=26083.20, stdev=945.21, samples=5 00:34:02.723 iops : min= 6222, max= 6750, avg=6520.80, stdev=236.30, samples=5 00:34:02.723 lat (usec) : 250=99.70%, 500=0.27%, 750=0.01% 00:34:02.723 lat (msec) : 2=0.02%, 50=0.01% 00:34:02.723 cpu : usr=0.38%, sys=3.53%, ctx=18552, majf=0, minf=2 00:34:02.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.723 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.723 issued rwts: total=18550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:02.723 00:34:02.723 Run status group 0 (all jobs): 00:34:02.723 READ: bw=78.6MiB/s (82.4MB/s), 17.7MiB/s-25.6MiB/s (18.6MB/s-26.8MB/s), io=278MiB (292MB), run=2920-3542msec 00:34:02.723 00:34:02.723 Disk stats (read/write): 00:34:02.723 nvme0n1: ios=20713/0, merge=0/0, ticks=2962/0, in_queue=2962, util=95.41% 00:34:02.723 nvme0n2: ios=15589/0, merge=0/0, ticks=3177/0, in_queue=3177, util=94.50% 00:34:02.723 nvme0n3: ios=13022/0, merge=0/0, ticks=2771/0, in_queue=2771, util=96.45% 00:34:02.723 nvme0n4: ios=18514/0, merge=0/0, ticks=2772/0, in_queue=2772, util=96.71% 00:34:02.982 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.982 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:03.240 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.240 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:03.499 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.499 06:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:03.499 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
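The "Operation not supported" io_u errors in the fio output above are the point of this hotplug test: fio.sh tears the raid0/concat0 arrays and the malloc bdevs out from under the exported namespaces while the four read jobs are still in flight, so in-flight reads complete with errors rather than hanging. A minimal sketch of the delete loop, matching the rpc.py calls traced above (bdev names as in this run):

    # Delete every backing bdev while fio still has reads outstanding;
    # each delete should surface in fio as an I/O error, never a hang.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_raid_delete concat0
    "$rpc" bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"
    done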
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.499 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:03.757 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:03.757 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 125428 00:34:03.757 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:03.757 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:03.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:03.757 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:03.757 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:03.757 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:03.758 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:03.758 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:03.758 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:03.758 nvmf hotplug test: fio failed as expected 00:34:03.758 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:03.758 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:03.758 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:03.758 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:04.017 06:36:23 
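The disconnect check traced above (waitforserial_disconnect SPDKISFASTANDAWESOME) polls lsblk until no block device reports the namespace's serial anymore, which is how the script knows `nvme disconnect` has fully torn down /dev/nvme0n1-n4. A minimal sketch of that wait, assuming the bounded retry loop the helper appears to use (the exact retry count is an assumption):

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Poll until no block device reports the given NVMe serial anymore.
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # assumed bound: give up after ~15s
            sleep 1
        done
        return 0
    }
    waitforserial_disconnect SPDKISFASTANDAWESOME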
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.017 rmmod nvme_tcp 00:34:04.017 rmmod nvme_fabrics 00:34:04.017 rmmod nvme_keyring 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 124955 ']' 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 124955 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 124955 ']' 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 124955 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124955 00:34:04.017 killing process with pid 124955 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124955' 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 124955 00:34:04.017 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 124955 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:04.276 06:36:23 
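nvmftestfini then unloads the initiator-side kernel modules under `set +e`, because a module can still be busy for a moment after the disconnect; the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above are that loop succeeding. A sketch of the step, assuming the break-on-success and retry delay that the {1..20} loop implies:

    sync
    set +e
    for i in {1..20}; do
        # nvme-tcp pulls in nvme-fabrics and nvme-keyring; -r removes the chain
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed delay: module may still be in use right after disconnect
    done
    set -e

The iptr helper traced just after strips every firewall rule the test added by filtering on its comment tag: iptables-save | grep -v SPDK_NVMF | iptables-restore.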
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:04.276 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:34:04.535 00:34:04.535 real 0m18.062s 00:34:04.535 user 0m55.274s 00:34:04.535 sys 0m7.644s 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.535 ************************************ 00:34:04.535 END TEST nvmf_fio_target 00:34:04.535 ************************************ 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:04.535 
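Before the END TEST summary above, nvmf_veth_fini unwound the virtual topology in reverse order of setup: detach each bridge-side veth end from the bridge, bring it down, delete the bridge, then delete the veth pairs and the target's network namespace. Condensed from the traced commands (remove_spdk_ns is sketched here as a plain netns delete, an assumption about its internals):

    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster   # detach from nvmf_br
        ip link set "$br" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if      # host-side veth ends
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk # drops anything left inside the namespace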
************************************ 00:34:04.535 START TEST nvmf_bdevio 00:34:04.535 ************************************ 00:34:04.535 06:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:04.535 * Looking for test storage... 00:34:04.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:04.535 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:04.535 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:04.535 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.797 --rc genhtml_branch_coverage=1 00:34:04.797 --rc genhtml_function_coverage=1 00:34:04.797 --rc genhtml_legend=1 00:34:04.797 --rc geninfo_all_blocks=1 00:34:04.797 --rc geninfo_unexecuted_blocks=1 00:34:04.797 00:34:04.797 ' 00:34:04.797 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.797 --rc genhtml_branch_coverage=1 00:34:04.797 --rc genhtml_function_coverage=1 00:34:04.797 --rc genhtml_legend=1 00:34:04.797 --rc geninfo_all_blocks=1 00:34:04.797 --rc geninfo_unexecuted_blocks=1 00:34:04.798 00:34:04.798 ' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.798 --rc genhtml_branch_coverage=1 00:34:04.798 --rc genhtml_function_coverage=1 00:34:04.798 --rc genhtml_legend=1 00:34:04.798 --rc geninfo_all_blocks=1 00:34:04.798 --rc geninfo_unexecuted_blocks=1 00:34:04.798 00:34:04.798 ' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.798 --rc genhtml_branch_coverage=1 00:34:04.798 --rc genhtml_function_coverage=1 00:34:04.798 --rc genhtml_legend=1 00:34:04.798 --rc geninfo_all_blocks=1 00:34:04.798 --rc geninfo_unexecuted_blocks=1 00:34:04.798 00:34:04.798 ' 00:34:04.798 06:36:24 
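The scripts/common.sh trace above is a field-wise version comparison: `lt 1.15 2` splits both versions on `.` and `-` into arrays, then compares component by component, treating missing fields as zero, to decide whether the installed lcov predates 2.x. A standalone sketch of the same logic, collapsed into one function (the real script routes through cmp_versions and a decimal() guard for non-numeric fields, which this sketch assumes away):

    lt() {   # returns 0 if version $1 < version $2, field by field
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov is older than 2.x"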
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/golangci, /opt/protoc and /opt/go entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.798 06:36:24
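The PATH printed above keeps growing because paths/export.sh prepends the Go, protoc, and golangci directories every time it is sourced, without checking whether they are already present; harmless, but it explains the repetition. A dedup-on-prepend idiom that would avoid it (illustrative only, this is not what export.sh actually does):

    prepend_path() {
        # Prepend $1 to PATH only if it is not already a component.
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH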
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:04.798 Cannot find device "nvmf_init_br" 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:04.798 Cannot find device "nvmf_init_br2" 00:34:04.798 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:04.799 Cannot find device "nvmf_tgt_br" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:04.799 Cannot find device "nvmf_tgt_br2" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:04.799 Cannot find device "nvmf_init_br" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:04.799 Cannot find device "nvmf_init_br2" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:04.799 Cannot find device "nvmf_tgt_br" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:04.799 Cannot find device "nvmf_tgt_br2" 00:34:04.799 06:36:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:04.799 Cannot find device "nvmf_br" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:04.799 Cannot find device "nvmf_init_if" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:04.799 Cannot find device "nvmf_init_if2" 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:04.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:04.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:04.799 06:36:24 
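The "Cannot find device" lines above are just the pre-cleanup of a topology that does not exist yet (each guarded with `true`); nvmf_veth_init then builds it: one namespace for the target, two veth pairs toward the initiator and two toward the target, with 10.0.0.1/.2 on the host side and 10.0.0.3/.4 inside the namespace. Condensed from the traced commands:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

All bridge-side ends are then enslaved to nvmf_br, and the ipts wrapper opens TCP/4420 with ACCEPT rules tagged by an SPDK_NVMF comment so teardown can strip them later.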
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:04.799 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:05.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:05.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:34:05.060 00:34:05.060 --- 10.0.0.3 ping statistics --- 00:34:05.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.060 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:05.060 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:05.060 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:34:05.060 00:34:05.060 --- 10.0.0.4 ping statistics --- 00:34:05.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.060 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:05.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:05.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 00:34:05.060 00:34:05.060 --- 10.0.0.1 ping statistics --- 00:34:05.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.060 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:05.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:34:05.060 00:34:05.060 --- 10.0.0.2 ping statistics --- 00:34:05.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.060 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=125845 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 125845 00:34:05.060 
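The four pings above verify reachability in both directions (host to 10.0.0.3/.4, namespace back to 10.0.0.1/.2) before nvmfappstart launches the target inside the namespace, as traced just below, with the 0x78 core mask and interrupt mode requested by bdevio.sh. A sketch of the launch, assuming the backgrounding and PID capture that the nvmfpid/waitforlisten pair implies:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # waitforlisten blocks until the app answers on /var/tmp/spdk.sock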
06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 125845 ']' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.060 06:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:05.060 [2024-11-27 06:36:24.437936] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:05.060 [2024-11-27 06:36:24.438512] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:34:05.060 [2024-11-27 06:36:24.438541] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.060 [2024-11-27 06:36:24.550857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:05.060 [2024-11-27 06:36:24.576003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:05.322 [2024-11-27 06:36:24.604163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.322 [2024-11-27 06:36:24.604198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.322 [2024-11-27 06:36:24.604204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.322 [2024-11-27 06:36:24.604210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.322 [2024-11-27 06:36:24.604215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:05.322 [2024-11-27 06:36:24.605435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:05.322 [2024-11-27 06:36:24.605806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:05.322 [2024-11-27 06:36:24.606515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:05.322 [2024-11-27 06:36:24.606660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:05.322 [2024-11-27 06:36:24.686531] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:05.322 [2024-11-27 06:36:24.687497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:05.322 [2024-11-27 06:36:24.688339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:05.322 [2024-11-27 06:36:24.688502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:05.322 [2024-11-27 06:36:24.688541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:05.909 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.909 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:05.909 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.909 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:05.909 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:05.910 [2024-11-27 06:36:25.391842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.910 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:06.217 Malloc0 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
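Once the reactors report interrupt mode and waitforlisten returns, bdevio.sh provisions the target over RPC exactly as traced here and continued just below: create the TCP transport, back it with a 64 MiB malloc bdev, and export it as cnode1 listening on 10.0.0.3:4420. The sequence in plain rpc.py form (rpc_cmd is a thin wrapper around this script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420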
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:06.217 [2024-11-27 06:36:25.472317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.217 { 00:34:06.217 "params": { 00:34:06.217 "name": "Nvme$subsystem", 00:34:06.217 "trtype": "$TEST_TRANSPORT", 00:34:06.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.217 "adrfam": "ipv4", 00:34:06.217 "trsvcid": "$NVMF_PORT", 00:34:06.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.217 "hdgst": ${hdgst:-false}, 00:34:06.217 "ddgst": ${ddgst:-false} 00:34:06.217 }, 00:34:06.217 "method": "bdev_nvme_attach_controller" 00:34:06.217 } 00:34:06.217 EOF 00:34:06.217 )") 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:06.217 06:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:06.217 "params": { 00:34:06.217 "name": "Nvme1", 00:34:06.217 "trtype": "tcp", 00:34:06.217 "traddr": "10.0.0.3", 00:34:06.217 "adrfam": "ipv4", 00:34:06.217 "trsvcid": "4420", 00:34:06.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.217 "hdgst": false, 00:34:06.217 "ddgst": false 00:34:06.217 }, 00:34:06.217 "method": "bdev_nvme_attach_controller" 00:34:06.217 }' 00:34:06.217 [2024-11-27 06:36:25.513504] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:34:06.217 [2024-11-27 06:36:25.513564] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125898 ] 00:34:06.217 [2024-11-27 06:36:25.628449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:34:06.217 [2024-11-27 06:36:25.652384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:06.217 [2024-11-27 06:36:25.679147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.217 [2024-11-27 06:36:25.679293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.217 [2024-11-27 06:36:25.679414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.480 I/O targets: 00:34:06.480 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:06.480 00:34:06.480 00:34:06.480 CUnit - A unit testing framework for C - Version 2.1-3 00:34:06.480 http://cunit.sourceforge.net/ 00:34:06.480 00:34:06.480 00:34:06.480 Suite: bdevio tests on: Nvme1n1 00:34:06.480 Test: blockdev write read block ...passed 00:34:06.480 Test: blockdev write zeroes read block ...passed 00:34:06.480 Test: blockdev write zeroes read no split ...passed 00:34:06.480 Test: blockdev write zeroes read split ...passed 00:34:06.480 Test: blockdev write zeroes read split partial ...passed 00:34:06.480 Test: blockdev reset ...[2024-11-27 06:36:25.931960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:06.480 [2024-11-27 06:36:25.932045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247e340 (9): Bad file descriptor 00:34:06.480 [2024-11-27 06:36:25.934714] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:06.480 passed 00:34:06.480 Test: blockdev write read 8 blocks ...passed 00:34:06.480 Test: blockdev write read size > 128k ...passed 00:34:06.480 Test: blockdev write read invalid size ...passed 00:34:06.480 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:06.480 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:06.480 Test: blockdev write read max offset ...passed 00:34:06.741 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:06.741 Test: blockdev writev readv 8 blocks ...passed 00:34:06.741 Test: blockdev writev readv 30 x 1block ...passed 00:34:06.741 Test: blockdev writev readv block ...passed 00:34:06.741 Test: blockdev writev readv size > 128k ...passed 00:34:06.741 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:06.741 Test: blockdev comparev and writev ...[2024-11-27 06:36:26.106402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.106432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.106442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.106448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.106805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.106820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.106829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.106833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.107236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.107250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.107259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.107263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.107522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.107535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.107544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:06.741 [2024-11-27 06:36:26.107549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.741 passed 00:34:06.741 Test: blockdev nvme passthru rw ...passed 00:34:06.741 Test: blockdev nvme passthru vendor specific ...[2024-11-27 06:36:26.191113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:06.741 [2024-11-27 06:36:26.191128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.191201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:06.741 [2024-11-27 06:36:26.191210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.191275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:06.741 [2024-11-27 06:36:26.191284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.741 [2024-11-27 06:36:26.191356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:06.741 [2024-11-27 06:36:26.191365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.741 passed 00:34:06.741 Test: blockdev nvme admin passthru ...passed 00:34:06.741 Test: blockdev copy ...passed 00:34:06.741 00:34:06.741 Run Summary: Type Total Ran Passed Failed Inactive 00:34:06.741 suites 1 1 n/a 0 0 00:34:06.741 tests 23 23 23 0 0 00:34:06.741 asserts 152 152 152 0 n/a 00:34:06.741 00:34:06.741 Elapsed time = 0.837 seconds 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:07.002 rmmod nvme_tcp 00:34:07.002 rmmod nvme_fabrics 00:34:07.002 rmmod nvme_keyring 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 125845 ']' 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 125845 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 125845 ']' 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 125845 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125845 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:07.002 killing process with pid 125845 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125845' 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 125845 00:34:07.002 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 125845 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
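
The killprocess sequence traced above is a careful teardown pattern: probe the pid with kill -0, confirm via ps --no-headers -o comm= that the target really is the expected reactor process (and not sudo), send SIGTERM, then reap it with wait so the next test cannot race a still-dying target. A minimal standalone sketch of that pattern follows; the function body is illustrative, not the verbatim autotest_common.sh source.

    #!/usr/bin/env bash
    # Sketch of the killprocess teardown pattern traced above (illustrative).
    killprocess() {
        local pid=$1
        # kill -0 sends no signal; it only tests that the pid exists and is signalable
        kill -0 "$pid" || return 1
        # sanity check: refuse to kill anything whose command name looks wrong
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        # wait reaps children of this shell, which the nvmf target process is here;
        # without it, teardown could proceed while the target still holds port 4420
        wait "$pid" 2>/dev/null || true
    }
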
00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:07.265 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:34:07.528 00:34:07.528 real 0m2.996s 00:34:07.528 user 0m6.169s 00:34:07.528 sys 0m0.872s 00:34:07.528 06:36:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.528 ************************************ 00:34:07.528 END TEST nvmf_bdevio 00:34:07.528 ************************************ 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:07.528 00:34:07.528 real 3m24.602s 00:34:07.528 user 9m4.577s 00:34:07.528 sys 1m0.578s 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.528 ************************************ 00:34:07.528 END TEST nvmf_target_core_interrupt_mode 00:34:07.528 ************************************ 00:34:07.528 06:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.528 06:36:27 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:07.528 06:36:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.528 06:36:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.528 06:36:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.528 ************************************ 00:34:07.528 START TEST nvmf_interrupt 00:34:07.528 ************************************ 00:34:07.528 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:07.792 * Looking for test storage... 00:34:07.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.792 --rc genhtml_branch_coverage=1 00:34:07.792 --rc genhtml_function_coverage=1 00:34:07.792 --rc genhtml_legend=1 00:34:07.792 --rc geninfo_all_blocks=1 00:34:07.792 --rc geninfo_unexecuted_blocks=1 00:34:07.792 00:34:07.792 ' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.792 --rc genhtml_branch_coverage=1 00:34:07.792 --rc genhtml_function_coverage=1 00:34:07.792 --rc genhtml_legend=1 00:34:07.792 --rc geninfo_all_blocks=1 00:34:07.792 --rc geninfo_unexecuted_blocks=1 00:34:07.792 00:34:07.792 ' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.792 --rc genhtml_branch_coverage=1 00:34:07.792 --rc genhtml_function_coverage=1 00:34:07.792 --rc genhtml_legend=1 00:34:07.792 --rc geninfo_all_blocks=1 00:34:07.792 --rc geninfo_unexecuted_blocks=1 00:34:07.792 00:34:07.792 ' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.792 --rc genhtml_branch_coverage=1 00:34:07.792 --rc genhtml_function_coverage=1 00:34:07.792 --rc genhtml_legend=1 00:34:07.792 --rc geninfo_all_blocks=1 00:34:07.792 --rc geninfo_unexecuted_blocks=1 00:34:07.792 00:34:07.792 ' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.792 06:36:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 
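
The scripts/common.sh trace a little above (lt 1.15 2 via cmp_versions) does a field-wise version comparison: both strings are split on '.', '-' and ':' and compared component by component. A condensed sketch of that logic, simplified in that the real helper also normalizes each field through a decimal() check:

    # Condensed, simplified sketch of the lt/cmp_versions helpers traced above.
    cmp_versions() {
        local IFS=.-:                 # split fields on '.', '-' and ':', as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # missing fields default to 0, so "2" compares like "2.0"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]              # every field compared equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "1.15 < 2"      # true: 1 < 2 in the first field

Field-wise splitting is what makes comparisons like 1.9 vs 1.10 come out right, where a plain lexical compare would order 1.10 before 1.9.
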
00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:07.793 Cannot find device "nvmf_init_br" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:07.793 Cannot find device "nvmf_init_br2" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:07.793 Cannot find device "nvmf_tgt_br" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:07.793 Cannot find device "nvmf_tgt_br2" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:07.793 Cannot find device "nvmf_init_br" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:07.793 Cannot find device "nvmf_init_br2" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:07.793 Cannot find device "nvmf_tgt_br" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:07.793 Cannot find device "nvmf_tgt_br2" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:07.793 Cannot find device "nvmf_br" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:07.793 Cannot find device "nvmf_init_if" 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:34:07.793 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:08.056 Cannot find device "nvmf_init_if2" 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:08.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:08.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:08.056 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:08.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:08.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:34:08.317 00:34:08.317 --- 10.0.0.3 ping statistics --- 00:34:08.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.317 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:08.317 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:08.317 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:34:08.317 00:34:08.317 --- 10.0.0.4 ping statistics --- 00:34:08.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.317 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:08.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:08.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:34:08.317 00:34:08.317 --- 10.0.0.1 ping statistics --- 00:34:08.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.317 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:08.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:08.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:34:08.317 00:34:08.317 --- 10.0.0.2 ping statistics --- 00:34:08.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.317 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=126149 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 126149 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 126149 ']' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:08.317 06:36:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:08.317 [2024-11-27 06:36:27.706210] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:08.317 [2024-11-27 06:36:27.707115] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:34:08.317 [2024-11-27 06:36:27.707169] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.317 [2024-11-27 06:36:27.826066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
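
The "Cannot find device" noise above is expected: nvmftestinit first tears down any leftover interfaces (each failing command is tolerated, per the "# true" markers), then rebuilds the test network from scratch — a nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.3/10.0.0.4), initiator-side ends on the host (10.0.0.1/10.0.0.2), and a nvmf_br bridge joining the peer halves. A condensed root-shell sketch of one initiator/target slice, with names and addresses taken from the trace (the real helper sets up two of each):

    # One initiator/target slice of the topology nvmf_veth_init builds above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two peer halves
    ip link set nvmf_tgt_br master nvmf_br
    # the SPDK_NVMF comment tag is what lets teardown strip the rule later
    # via iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # host -> target reachability, as checked in the trace
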
00:34:08.580 [2024-11-27 06:36:27.850025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:08.580 [2024-11-27 06:36:27.889581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.580 [2024-11-27 06:36:27.889640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.580 [2024-11-27 06:36:27.889648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.580 [2024-11-27 06:36:27.889653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.580 [2024-11-27 06:36:27.889658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.580 [2024-11-27 06:36:27.891242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.580 [2024-11-27 06:36:27.891387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.580 [2024-11-27 06:36:28.004960] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:08.580 [2024-11-27 06:36:28.005538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:08.580 [2024-11-27 06:36:28.006081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:09.152 5000+0 records in 00:34:09.152 5000+0 records out 00:34:09.152 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0212142 s, 483 MB/s 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.152 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:09.414 AIO0 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:09.414 [2024-11-27 06:36:28.724284] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:09.414 [2024-11-27 06:36:28.764768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 126149 0 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 126149 0 idle 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:09.414 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126149 root 20 0 64.2g 45568 32384 S 0.0 0.4 0:00.25 reactor_0' 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126149 root 20 0 64.2g 45568 32384 S 0.0 0.4 0:00.25 reactor_0 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print 
$9}' 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 126149 1 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 126149 1 idle 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:09.674 06:36:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126157 root 20 0 64.2g 45568 32384 S 0.0 0.4 0:00.00 reactor_1' 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126157 root 20 0 64.2g 45568 32384 S 0.0 0.4 0:00.00 reactor_1 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=126221 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 126149 0 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 126149 0 busy 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:09.674 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126149 root 20 0 64.2g 46464 32640 S 12.5 0.4 0:00.27 reactor_0' 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126149 root 20 0 64.2g 46464 32640 S 12.5 0.4 0:00.27 reactor_0 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=12.5 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=12 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:09.934 06:36:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:10.877 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:10.877 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:10.877 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:10.877 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126149 root 20 0 64.2g 46720 32640 D 99.9 0.4 0:01.41 reactor_0' 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126149 root 20 0 64.2g 46720 32640 D 99.9 0.4 0:01.41 reactor_0 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 126149 1 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 126149 1 busy 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:11.137 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126157 root 20 0 64.2g 46720 32640 R 66.7 0.4 0:00.68 reactor_1' 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126157 root 20 0 64.2g 46720 32640 R 66.7 0.4 0:00.68 reactor_1 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:11.138 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:11.398 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:11.398 06:36:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:11.398 06:36:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 126221 00:34:21.398 Initializing NVMe Controllers 00:34:21.398 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:34:21.398 Controller IO queue size 256, less than required. 00:34:21.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:21.398 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:21.398 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:21.398 Initialization complete. Launching workers. 
00:34:21.398 ========================================================
00:34:21.398 Latency(us)
00:34:21.398 Device Information : IOPS MiB/s Average min max
00:34:21.398 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 7208.10 28.16 35572.93 20521.47 102546.72
00:34:21.398 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 7174.30 28.02 35715.33 10672.55 104965.69
00:34:21.398 ========================================================
00:34:21.398 Total : 14382.40 56.18 35643.96 10672.55 104965.69
00:34:21.398
00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 126149 0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 126149 0 idle 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126149 root 20 0 64.2g 46720 32640 S 0.0 0.4 0:12.22 reactor_0' 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126149 root 20 0 64.2g 46720 32640 S 0.0 0.4 0:12.22 reactor_0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 126149 1 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 126149 1 idle 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126157 root 20 0 64.2g 46720 32640 S 0.0 0.4 0:06.01 reactor_1' 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126157 root 20 0 64.2g 46720 32640 S 0.0 0.4 0:06.01 reactor_1 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:21.398 06:36:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 126149 0 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 126149 0 idle 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:22.340 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126149 root 20 0 64.2g 48768 32640 S 6.7 0.4 0:12.28 reactor_0' 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126149 root 20 0 64.2g 48768 32640 S 6.7 0.4 0:12.28 reactor_0 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 126149 1 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 126149 1 idle 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=126149 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 126149 -w 256 00:34:22.602 06:36:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 126157 root 20 0 64.2g 48768 32640 S 0.0 0.4 0:06.01 reactor_1' 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 126157 root 20 0 64.2g 48768 32640 S 0.0 0.4 0:06.01 reactor_1 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:22.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:22.864 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.125 rmmod nvme_tcp 00:34:23.125 rmmod nvme_fabrics 00:34:23.125 rmmod nvme_keyring 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 126149 ']' 00:34:23.125 06:36:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 126149 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 126149 ']' 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 126149 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126149 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126149' 00:34:23.125 killing process with pid 126149 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 126149 00:34:23.125 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 126149 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:23.387 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:23.648 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:23.648 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:23.648 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:23.648 06:36:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:23.648 06:36:42 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.648 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:23.648 06:36:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.648 06:36:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:34:23.648 00:34:23.648 real 0m15.977s 00:34:23.648 user 0m28.110s 00:34:23.648 sys 0m5.946s 00:34:23.648 06:36:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.648 ************************************ 00:34:23.648 END TEST nvmf_interrupt 00:34:23.648 ************************************ 00:34:23.648 06:36:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:23.648 00:34:23.648 real 25m1.570s 00:34:23.648 user 73m31.591s 00:34:23.648 sys 4m23.804s 00:34:23.648 ************************************ 00:34:23.648 END TEST nvmf_tcp 00:34:23.648 ************************************ 00:34:23.648 06:36:43 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.648 06:36:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.648 06:36:43 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:23.648 06:36:43 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:23.648 06:36:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:23.648 06:36:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.648 06:36:43 -- common/autotest_common.sh@10 -- # set +x 00:34:23.648 ************************************ 00:34:23.648 START TEST spdkcli_nvmf_tcp 00:34:23.648 ************************************ 00:34:23.648 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:23.908 * Looking for test storage... 
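
Note on the reactor checks traced in the nvmf_interrupt run above: reactor_is_busy_or_idle (interrupt/common.sh @10-35) decides whether an SPDK reactor thread is quiet by taking one batch sample of top for the target PID, grepping out the reactor_<idx> thread line, reading the %CPU column (field 9, e.g. 0.0 or 6.7), truncating it to an integer, and comparing it against fixed thresholds: at most 30% CPU counts as idle and roughly 65% or more as busy, with up to ten samples (the traced j counter). Below is a minimal bash sketch reconstructed from those traced commands; the helper and variable names mirror the trace, but the pause between samples and the exact branch structure of the shipped common.sh are assumptions.

    # Sketch reconstructed from the xtrace above; not the shipped interrupt/common.sh.
    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=65 idle_threshold=30 # traced at common.sh@13-14
        hash top || return 1                      # the probe requires top in PATH
        local j top_reactor cpu_rate
        for ((j = 10; j != 0; j--)); do           # up to 10 samples, per the traced j counter
            # One batch iteration (-b -n 1), threads view (-H), wide lines, this PID only.
            top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
            # Field 9 of the thread line is %CPU; strip leading blanks, drop the fraction.
            cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}
            if [[ $state == busy ]]; then
                ((cpu_rate >= busy_threshold)) && return 0
            else
                ((cpu_rate <= idle_threshold)) && return 0
            fi
            sleep 1 # assumed pause between samples
        done
        return 1
    }

In the trace it is reached through wrappers such as reactor_is_idle 126149 0 (target/interrupt.sh@46), which presumably just pin the state argument to idle.
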
00:34:23.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:34:23.908 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:23.908 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:34:23.908 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:23.908 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:23.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.909 --rc genhtml_branch_coverage=1 00:34:23.909 --rc genhtml_function_coverage=1 00:34:23.909 --rc genhtml_legend=1 00:34:23.909 --rc geninfo_all_blocks=1 00:34:23.909 --rc geninfo_unexecuted_blocks=1 00:34:23.909 00:34:23.909 ' 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:23.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.909 --rc genhtml_branch_coverage=1 
00:34:23.909 --rc genhtml_function_coverage=1 00:34:23.909 --rc genhtml_legend=1 00:34:23.909 --rc geninfo_all_blocks=1 00:34:23.909 --rc geninfo_unexecuted_blocks=1 00:34:23.909 00:34:23.909 ' 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:23.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.909 --rc genhtml_branch_coverage=1 00:34:23.909 --rc genhtml_function_coverage=1 00:34:23.909 --rc genhtml_legend=1 00:34:23.909 --rc geninfo_all_blocks=1 00:34:23.909 --rc geninfo_unexecuted_blocks=1 00:34:23.909 00:34:23.909 ' 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:23.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:23.909 --rc genhtml_branch_coverage=1 00:34:23.909 --rc genhtml_function_coverage=1 00:34:23.909 --rc genhtml_legend=1 00:34:23.909 --rc geninfo_all_blocks=1 00:34:23.909 --rc geninfo_unexecuted_blocks=1 00:34:23.909 00:34:23.909 ' 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:23.909 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:23.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=126556 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 126556 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 126556 ']' 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.910 06:36:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:23.910 [2024-11-27 06:36:43.341789] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:34:23.910 [2024-11-27 06:36:43.341844] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126556 ] 00:34:24.170 [2024-11-27 06:36:43.456694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:24.170 [2024-11-27 06:36:43.481372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:24.170 [2024-11-27 06:36:43.510285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.170 [2024-11-27 06:36:43.510374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.743 06:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.743 06:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:24.743 06:36:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:24.743 06:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.743 06:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:25.005 06:36:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:25.005 06:36:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:25.005 06:36:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:25.005 06:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.005 06:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:25.005 06:36:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:25.005 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:25.005 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:25.005 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:25.005 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:25.005 '\''/bdevs/malloc create 32 512 Malloc6'\'' 
'\''Malloc6'\'' True 00:34:25.005 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:25.005 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:25.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:25.006 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:25.006 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:25.006 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:25.006 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:25.006 ' 00:34:27.555 [2024-11-27 06:36:46.929177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.942 [2024-11-27 06:36:48.206382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:31.479 [2024-11-27 06:36:50.552339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:33.378 [2024-11-27 06:36:52.573978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:34.753 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:34.753 Executing command: ['/bdevs/malloc create 32 512 
Malloc2', 'Malloc2', True] 00:34:34.753 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:34.753 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:34.753 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:34.753 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:34.753 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:34.753 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:34.753 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:34.753 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:34.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:34.753 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:34.753 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.321 06:36:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:35.321 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:35.321 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:35.321 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:35.321 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:35.321 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:35.321 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:35.321 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:35.321 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:35.321 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:35.321 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:35.321 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:35.321 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:35.321 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:35.321 ' 00:34:40.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:40.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:40.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:40.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 
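
Note on the check_match step traced above (spdkcli/common.sh @44-46): the test dumps the live configuration with scripts/spdkcli.py ll /nvmf, hands the stored template spdkcli_nvmf.test.match to the test/app/match tool, and then deletes the generated spdkcli_nvmf.test file. A short bash sketch of that flow follows; the redirect of the spdkcli.py output into the file that @46 later removes is an assumption (xtrace does not display redirections), and match is invoked with only the template path, so it evidently derives the file under test from that name.

    # Sketch of the traced check_match flow; paths follow the xtrace above.
    check_match() {
        local repo=/home/vagrant/spdk_repo/spdk
        local dump=$repo/test/spdkcli/match_files/spdkcli_nvmf.test
        # Capture the current spdkcli view of the /nvmf branch (assumed redirect).
        "$repo/scripts/spdkcli.py" ll /nvmf > "$dump"
        # match exits nonzero if the dump diverges from its .match template,
        # which fails the test run.
        "$repo/test/app/match/match" "$dump.match"
        rm -f "$dump"
    }
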
00:34:40.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:40.585 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:40.585 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:40.585 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:40.585 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:40.585 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:40.585 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:40.585 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:40.585 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:40.585 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:40.585 06:37:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:40.585 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.585 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 126556 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 126556 ']' 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 126556 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126556 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:40.844 killing process with pid 126556 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126556' 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 126556 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 126556 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:40.844 06:37:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 126556 ']' 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 126556 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 126556 ']' 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 126556 00:34:40.845 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (126556) - No such process 00:34:40.845 Process with pid 126556 is not found 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 126556 is not found' 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test 
/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:40.845 00:34:40.845 real 0m17.158s 00:34:40.845 user 0m36.734s 00:34:40.845 sys 0m0.716s 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.845 ************************************ 00:34:40.845 END TEST spdkcli_nvmf_tcp 00:34:40.845 ************************************ 00:34:40.845 06:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:40.845 06:37:00 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:40.845 06:37:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:40.845 06:37:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.845 06:37:00 -- common/autotest_common.sh@10 -- # set +x 00:34:40.845 ************************************ 00:34:40.845 START TEST nvmf_identify_passthru 00:34:40.845 ************************************ 00:34:40.845 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:41.105 * Looking for test storage... 00:34:41.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:41.105 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:41.105 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:34:41.105 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:41.105 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.105 06:37:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:41.105 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.106 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.106 --rc genhtml_branch_coverage=1 00:34:41.106 --rc genhtml_function_coverage=1 00:34:41.106 --rc genhtml_legend=1 00:34:41.106 --rc geninfo_all_blocks=1 00:34:41.106 --rc geninfo_unexecuted_blocks=1 00:34:41.106 00:34:41.106 ' 00:34:41.106 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.106 --rc genhtml_branch_coverage=1 00:34:41.106 --rc genhtml_function_coverage=1 00:34:41.106 --rc genhtml_legend=1 00:34:41.106 --rc geninfo_all_blocks=1 00:34:41.106 --rc geninfo_unexecuted_blocks=1 00:34:41.106 00:34:41.106 ' 00:34:41.106 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.106 --rc genhtml_branch_coverage=1 00:34:41.106 --rc genhtml_function_coverage=1 00:34:41.106 --rc genhtml_legend=1 00:34:41.106 --rc geninfo_all_blocks=1 00:34:41.106 --rc geninfo_unexecuted_blocks=1 00:34:41.106 00:34:41.106 ' 00:34:41.106 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.106 --rc genhtml_branch_coverage=1 00:34:41.106 --rc genhtml_function_coverage=1 00:34:41.106 --rc genhtml_legend=1 00:34:41.106 --rc geninfo_all_blocks=1 00:34:41.106 --rc geninfo_unexecuted_blocks=1 00:34:41.106 00:34:41.106 ' 00:34:41.106 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.106 
06:37:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:41.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.106 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:41.106 06:37:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.106 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.106 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:41.106 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:41.106 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:41.107 Cannot find device "nvmf_init_br" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:41.107 Cannot find device "nvmf_init_br2" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:41.107 Cannot find device "nvmf_tgt_br" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:41.107 Cannot find device "nvmf_tgt_br2" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:41.107 Cannot find device "nvmf_init_br" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:41.107 Cannot find device "nvmf_init_br2" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:41.107 Cannot find device "nvmf_tgt_br" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:41.107 Cannot find device "nvmf_tgt_br2" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:41.107 Cannot find device "nvmf_br" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:41.107 Cannot find device "nvmf_init_if" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:41.107 Cannot find device "nvmf_init_if2" 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:41.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:41.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:41.107 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:41.368 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:41.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:41.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:34:41.369 00:34:41.369 --- 10.0.0.3 ping statistics --- 00:34:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.369 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:41.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:41.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:34:41.369 00:34:41.369 --- 10.0.0.4 ping statistics --- 00:34:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.369 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:41.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:41.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:34:41.369 00:34:41.369 --- 10.0.0.1 ping statistics --- 00:34:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.369 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:41.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:41.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:34:41.369 00:34:41.369 --- 10.0.0.2 ping statistics --- 00:34:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.369 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:41.369 06:37:00 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:41.369 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.369 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:34:41.369 06:37:00 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:34:41.369 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:34:41.369 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:34:41.369 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:34:41.369 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:41.369 06:37:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:41.630 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
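The serial-number extraction that just completed is plain text scraping of spdk_nvme_identify output. A standalone sketch of the same step, reusing the repo path and the PCIe address reported in this run (both are machine-specific and would differ elsewhere):

    #!/usr/bin/env bash
    # Sketch of the nvme_identify step traced above: run SPDK's identify tool
    # against the controller over PCIe, then grep/awk out individual fields.
    SPDK=/home/vagrant/spdk_repo/spdk        # repo location in this run
    bdf=0000:00:10.0                         # first NVMe BDF found above
    identify() { "$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0; }
    serial=$(identify | grep 'Serial Number:' | awk '{print $3}')
    model=$(identify | grep 'Model Number:' | awk '{print $3}')
    echo "serial=$serial model=$model"       # 12340 / QEMU on this QEMU VM

The same scrape is repeated later against the NVMe-oF controller, and the two serial/model pairs are compared to confirm that the passthru controller reports the identity of the underlying PCIe device.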
00:34:41.630 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:34:41.630 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:41.630 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:41.890 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:34:41.890 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.890 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.890 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=127071 00:34:41.890 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:41.890 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:41.890 06:37:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 127071 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 127071 ']' 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.890 06:37:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.890 [2024-11-27 06:37:01.304682] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:34:41.890 [2024-11-27 06:37:01.304755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.151 [2024-11-27 06:37:01.425058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:42.151 [2024-11-27 06:37:01.446079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:42.151 [2024-11-27 06:37:01.479232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:42.151 [2024-11-27 06:37:01.479279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:42.151 [2024-11-27 06:37:01.479286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:42.151 [2024-11-27 06:37:01.479292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:42.151 [2024-11-27 06:37:01.479296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:42.151 [2024-11-27 06:37:01.480382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.151 [2024-11-27 06:37:01.480699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:42.151 [2024-11-27 06:37:01.480989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:42.151 [2024-11-27 06:37:01.481180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.722 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.722 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:42.723 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:42.723 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.723 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.723 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.723 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:42.723 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.723 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.984 [2024-11-27 06:37:02.304449] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.984 [2024-11-27 06:37:02.314012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.984 Nvme0n1 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.984 06:37:02 nvmf_identify_passthru 
-- common/autotest_common.sh@10 -- # set +x 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.984 [2024-11-27 06:37:02.461253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:42.984 [ 00:34:42.984 { 00:34:42.984 "allow_any_host": true, 00:34:42.984 "hosts": [], 00:34:42.984 "listen_addresses": [], 00:34:42.984 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:42.984 "subtype": "Discovery" 00:34:42.984 }, 00:34:42.984 { 00:34:42.984 "allow_any_host": true, 00:34:42.984 "hosts": [], 00:34:42.984 "listen_addresses": [ 00:34:42.984 { 00:34:42.984 "adrfam": "IPv4", 00:34:42.984 "traddr": "10.0.0.3", 00:34:42.984 "trsvcid": "4420", 00:34:42.984 "trtype": "TCP" 00:34:42.984 } 00:34:42.984 ], 00:34:42.984 "max_cntlid": 65519, 00:34:42.984 "max_namespaces": 1, 00:34:42.984 "min_cntlid": 1, 00:34:42.984 "model_number": "SPDK bdev Controller", 00:34:42.984 "namespaces": [ 00:34:42.984 { 00:34:42.984 "bdev_name": "Nvme0n1", 00:34:42.984 "name": "Nvme0n1", 00:34:42.984 "nguid": "7626E3032DCA476E9F16FE1EEB035AB0", 00:34:42.984 "nsid": 1, 00:34:42.984 "uuid": "7626e303-2dca-476e-9f16-fe1eeb035ab0" 00:34:42.984 } 00:34:42.984 ], 00:34:42.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.984 "serial_number": "SPDK00000000000001", 00:34:42.984 "subtype": "NVMe" 00:34:42.984 } 00:34:42.984 ] 00:34:42.984 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:42.984 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:43.245 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:34:43.245 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:43.245 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:43.245 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:43.505 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:34:43.505 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:34:43.505 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:34:43.505 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:43.505 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.505 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:43.505 06:37:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.505 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:43.505 06:37:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:43.505 06:37:02 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:43.505 06:37:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:43.765 rmmod nvme_tcp 00:34:43.765 rmmod nvme_fabrics 00:34:43.765 rmmod nvme_keyring 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 127071 ']' 00:34:43.765 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 127071 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 127071 ']' 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 127071 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127071 00:34:43.765 killing process with pid 127071 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127071' 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 127071 00:34:43.765 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 127071 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@297 
-- # iptr 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:44.026 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:44.027 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:44.027 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:44.027 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.027 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:44.027 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.027 06:37:03 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:34:44.027 00:34:44.027 real 0m3.210s 00:34:44.027 user 0m7.741s 00:34:44.027 sys 0m0.802s 00:34:44.027 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.027 ************************************ 00:34:44.027 END TEST nvmf_identify_passthru 00:34:44.027 ************************************ 00:34:44.027 06:37:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.289 06:37:03 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:44.289 06:37:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:44.289 06:37:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.289 06:37:03 -- common/autotest_common.sh@10 -- # set +x 00:34:44.289 ************************************ 00:34:44.289 START TEST nvmf_dif 00:34:44.289 ************************************ 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:44.289 * Looking for test storage... 
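For reference, here is the target-side configuration the test drove through rpc_cmd (identify_passthru.sh@36-44 above), condensed into one sequence. rpc_cmd is the test suite's RPC wrapper, so this sketch assumes the stock scripts/rpc.py client talking to the nvmf_tgt that was started with --wait-for-rpc, as in the trace:

    # Condensed from the rpc_cmd calls traced above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_set_config --passthru-identify-ctrlr    # forward Identify to the real controller
    $RPC framework_start_init                         # finish the deferred startup
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

With that in place, identifying 'trtype:tcp ... subnqn:nqn.2016-06.io.spdk:cnode1' returned the same serial (12340) and model (QEMU) as the PCIe path, which is what the test's two '!=' checks verified before deleting the subsystem and tearing down the veth network.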
00:34:44.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:44.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.289 --rc genhtml_branch_coverage=1 00:34:44.289 --rc genhtml_function_coverage=1 00:34:44.289 --rc genhtml_legend=1 00:34:44.289 --rc geninfo_all_blocks=1 00:34:44.289 --rc geninfo_unexecuted_blocks=1 00:34:44.289 00:34:44.289 ' 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:44.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.289 --rc genhtml_branch_coverage=1 00:34:44.289 --rc genhtml_function_coverage=1 00:34:44.289 --rc genhtml_legend=1 00:34:44.289 --rc geninfo_all_blocks=1 00:34:44.289 --rc geninfo_unexecuted_blocks=1 00:34:44.289 00:34:44.289 ' 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:34:44.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.289 --rc genhtml_branch_coverage=1 00:34:44.289 --rc genhtml_function_coverage=1 00:34:44.289 --rc genhtml_legend=1 00:34:44.289 --rc geninfo_all_blocks=1 00:34:44.289 --rc geninfo_unexecuted_blocks=1 00:34:44.289 00:34:44.289 ' 00:34:44.289 06:37:03 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:44.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.289 --rc genhtml_branch_coverage=1 00:34:44.289 --rc genhtml_function_coverage=1 00:34:44.289 --rc genhtml_legend=1 00:34:44.289 --rc geninfo_all_blocks=1 00:34:44.289 --rc geninfo_unexecuted_blocks=1 00:34:44.289 00:34:44.289 ' 00:34:44.289 06:37:03 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.289 06:37:03 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.289 06:37:03 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.289 06:37:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.290 06:37:03 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.290 06:37:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.290 06:37:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:44.290 06:37:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:44.290 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.290 06:37:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:44.290 06:37:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:44.290 06:37:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:44.290 06:37:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:44.290 06:37:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.290 06:37:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:44.290 06:37:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:44.290 06:37:03 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:44.290 Cannot find device "nvmf_init_br" 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@162 -- # true 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:44.290 Cannot find device "nvmf_init_br2" 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@163 -- # true 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:44.290 Cannot find device "nvmf_tgt_br" 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@164 -- # true 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:44.290 Cannot find device "nvmf_tgt_br2" 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@165 -- # true 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:44.290 Cannot find device "nvmf_init_br" 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@166 -- # true 00:34:44.290 06:37:03 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:44.551 Cannot find device "nvmf_init_br2" 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@167 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:44.551 Cannot find device "nvmf_tgt_br" 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@168 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:44.551 Cannot find device "nvmf_tgt_br2" 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@169 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:44.551 Cannot find device "nvmf_br" 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@170 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:34:44.551 Cannot find device "nvmf_init_if" 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@171 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:44.551 Cannot find device "nvmf_init_if2" 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@172 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:44.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@173 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:44.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@174 -- # true 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:44.551 06:37:03 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:44.551 06:37:04 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:44.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:44.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.156 ms 00:34:44.551 00:34:44.551 --- 10.0.0.3 ping statistics --- 00:34:44.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.551 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:44.551 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:44.551 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:34:44.551 00:34:44.551 --- 10.0.0.4 ping statistics --- 00:34:44.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.551 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:34:44.551 06:37:04 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:44.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:44.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:34:44.812 00:34:44.812 --- 10.0.0.1 ping statistics --- 00:34:44.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.812 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:34:44.812 06:37:04 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:44.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:44.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:34:44.812 00:34:44.812 --- 10.0.0.2 ping statistics --- 00:34:44.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.812 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:34:44.812 06:37:04 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.812 06:37:04 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:34:44.812 06:37:04 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:44.812 06:37:04 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:45.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:45.077 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:45.077 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:45.077 06:37:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:45.077 06:37:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:45.077 06:37:04 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.077 06:37:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=127473 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 127473 00:34:45.077 06:37:04 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 127473 ']' 00:34:45.077 06:37:04 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.077 06:37:04 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:45.077 06:37:04 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.078 06:37:04 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.078 06:37:04 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.078 06:37:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.078 [2024-11-27 06:37:04.524638] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:34:45.078 [2024-11-27 06:37:04.524715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.396 [2024-11-27 06:37:04.642268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
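The one thing distinguishing this target from the previous test's is the transport option appended above: with --dif-insert-or-strip, the TCP target inserts DIF protection information on writes and strips it on reads, so the initiator sees an ordinary unprotected device. A condensed sketch of the setup the trace performs below, under the same scripts/rpc.py assumption as earlier:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport that inserts/strips protection information at the target.
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # Null backing bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 1.
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

fio_dif_1_default then drives I/O against that namespace with fio's spdk_bdev ioengine, attaching over TCP through a JSON config generated on the fly (the gen_nvmf_target_json heredoc that follows).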
00:34:45.396 [2024-11-27 06:37:04.669263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.396 [2024-11-27 06:37:04.697537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.396 [2024-11-27 06:37:04.697585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.396 [2024-11-27 06:37:04.697593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.396 [2024-11-27 06:37:04.697598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.396 [2024-11-27 06:37:04.697603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.396 [2024-11-27 06:37:04.697950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:45.973 06:37:05 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.973 06:37:05 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.973 06:37:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:45.973 06:37:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.973 [2024-11-27 06:37:05.478197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.973 06:37:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.973 06:37:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.973 ************************************ 00:34:45.973 START TEST fio_dif_1_default 00:34:45.973 ************************************ 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.973 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:46.234 bdev_null0 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:46.234 [2024-11-27 06:37:05.530312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:46.234 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:46.235 { 00:34:46.235 "params": { 00:34:46.235 "name": "Nvme$subsystem", 00:34:46.235 "trtype": "$TEST_TRANSPORT", 00:34:46.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:46.235 "adrfam": "ipv4", 00:34:46.235 "trsvcid": "$NVMF_PORT", 00:34:46.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:46.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:46.235 "hdgst": ${hdgst:-false}, 00:34:46.235 "ddgst": ${ddgst:-false} 00:34:46.235 }, 00:34:46.235 "method": "bdev_nvme_attach_controller" 
00:34:46.235 } 00:34:46.235 EOF 00:34:46.235 )") 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:46.235 "params": { 00:34:46.235 "name": "Nvme0", 00:34:46.235 "trtype": "tcp", 00:34:46.235 "traddr": "10.0.0.3", 00:34:46.235 "adrfam": "ipv4", 00:34:46.235 "trsvcid": "4420", 00:34:46.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:46.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:46.235 "hdgst": false, 00:34:46.235 "ddgst": false 00:34:46.235 }, 00:34:46.235 "method": "bdev_nvme_attach_controller" 00:34:46.235 }' 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:46.235 06:37:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.235 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:46.235 fio-3.35 00:34:46.235 Starting 1 thread 00:34:58.444 00:34:58.444 filename0: (groupid=0, jobs=1): err= 0: pid=127552: Wed Nov 27 06:37:16 2024 00:34:58.444 read: IOPS=3603, BW=14.1MiB/s (14.8MB/s)(141MiB/10017msec) 00:34:58.444 slat (nsec): min=5436, max=48249, avg=6410.75, stdev=1306.52 00:34:58.444 clat (usec): min=299, max=42428, avg=1092.41, stdev=5317.56 00:34:58.444 lat (usec): min=305, 
max=42436, avg=1098.82, stdev=5317.50 00:34:58.444 clat percentiles (usec): 00:34:58.444 | 1.00th=[ 314], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:34:58.444 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 437], 00:34:58.444 | 70.00th=[ 449], 80.00th=[ 457], 90.00th=[ 469], 95.00th=[ 474], 00:34:58.444 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:34:58.444 | 99.99th=[42206] 00:34:58.444 bw ( KiB/s): min= 8224, max=30560, per=100.00%, avg=14438.40, stdev=7088.12, samples=20 00:34:58.444 iops : min= 2056, max= 7640, avg=3609.60, stdev=1772.03, samples=20 00:34:58.444 lat (usec) : 500=98.17%, 750=0.06% 00:34:58.444 lat (msec) : 4=0.01%, 50=1.76% 00:34:58.444 cpu : usr=90.44%, sys=8.06%, ctx=17, majf=0, minf=0 00:34:58.444 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.444 issued rwts: total=36100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.444 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:58.444 00:34:58.444 Run status group 0 (all jobs): 00:34:58.444 READ: bw=14.1MiB/s (14.8MB/s), 14.1MiB/s-14.1MiB/s (14.8MB/s-14.8MB/s), io=141MiB (148MB), run=10017-10017msec 00:34:58.444 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:58.444 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 00:34:58.445 real 0m10.902s 00:34:58.445 user 0m9.578s 00:34:58.445 sys 0m1.049s 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 ************************************ 00:34:58.445 END TEST fio_dif_1_default 00:34:58.445 ************************************ 00:34:58.445 06:37:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:58.445 06:37:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:58.445 06:37:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 ************************************ 00:34:58.445 START TEST fio_dif_1_multi_subsystems 00:34:58.445 
************************************ 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 bdev_null0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 [2024-11-27 06:37:16.465703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 bdev_null1 
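Each `create_subsystem N` in this trace is the same four-RPC recipe, parameterized by the index: a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, wrapped in subsystem nqn.2016-06.io.spdk:cnodeN and exposed on the 10.0.0.3:4420 TCP listener. A sketch of the iteration for index 1, assuming the stock scripts/rpc.py front end behind the test's rpc_cmd wrapper:

  rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      --serial-number 53313233-1 --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420

The destroy_subsystems calls at the end of each test are the mirror image: nvmf_delete_subsystem followed by bdev_null_delete, exactly as in the teardown traces further down.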
00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:58.445 06:37:16 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.445 { 00:34:58.445 "params": { 00:34:58.445 "name": "Nvme$subsystem", 00:34:58.445 "trtype": "$TEST_TRANSPORT", 00:34:58.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.445 "adrfam": "ipv4", 00:34:58.445 "trsvcid": "$NVMF_PORT", 00:34:58.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.445 "hdgst": ${hdgst:-false}, 00:34:58.445 "ddgst": ${ddgst:-false} 00:34:58.445 }, 00:34:58.445 "method": "bdev_nvme_attach_controller" 00:34:58.445 } 00:34:58.445 EOF 00:34:58.445 )") 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.445 { 00:34:58.445 "params": { 00:34:58.445 "name": "Nvme$subsystem", 00:34:58.445 "trtype": "$TEST_TRANSPORT", 00:34:58.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.445 "adrfam": "ipv4", 00:34:58.445 "trsvcid": "$NVMF_PORT", 00:34:58.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.445 "hdgst": ${hdgst:-false}, 00:34:58.445 "ddgst": ${ddgst:-false} 00:34:58.445 }, 00:34:58.445 "method": "bdev_nvme_attach_controller" 00:34:58.445 } 00:34:58.445 EOF 00:34:58.445 )") 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
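gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller stanza per subsystem in a bash array and joins them with commas; the joined text (printed just below) is what fio's spdk_bdev engine consumes on /dev/fd/62. A condensed sketch of the pattern visible in the trace, not the helper's exact source:

  config=()
  for subsystem in 0 1; do
    config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": false, "ddgst": false},
 "method": "bdev_nvme_attach_controller"}
EOF
    )")
  done
  # join the stanzas with a comma and sanity-check with jq, as the trace does
  (IFS=,; printf '%s\n' "${config[*]}") | jq .

Note that hdgst/ddgst default to false (the `${hdgst:-false}` expansions above), so the NVMe/TCP header and data digests stay off for these dif runs.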
00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:58.445 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:58.445 "params": { 00:34:58.445 "name": "Nvme0", 00:34:58.445 "trtype": "tcp", 00:34:58.446 "traddr": "10.0.0.3", 00:34:58.446 "adrfam": "ipv4", 00:34:58.446 "trsvcid": "4420", 00:34:58.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.446 "hdgst": false, 00:34:58.446 "ddgst": false 00:34:58.446 }, 00:34:58.446 "method": "bdev_nvme_attach_controller" 00:34:58.446 },{ 00:34:58.446 "params": { 00:34:58.446 "name": "Nvme1", 00:34:58.446 "trtype": "tcp", 00:34:58.446 "traddr": "10.0.0.3", 00:34:58.446 "adrfam": "ipv4", 00:34:58.446 "trsvcid": "4420", 00:34:58.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.446 "hdgst": false, 00:34:58.446 "ddgst": false 00:34:58.446 }, 00:34:58.446 "method": "bdev_nvme_attach_controller" 00:34:58.446 }' 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:58.446 06:37:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.446 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:58.446 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:58.446 fio-3.35 00:34:58.446 Starting 2 threads 00:35:08.430 00:35:08.430 filename0: (groupid=0, jobs=1): err= 0: pid=127712: Wed Nov 27 06:37:27 2024 00:35:08.430 read: IOPS=168, BW=676KiB/s (692kB/s)(6768KiB/10019msec) 00:35:08.430 slat (nsec): min=5497, max=39948, avg=7162.32, stdev=2755.41 00:35:08.430 clat (usec): min=295, max=42361, avg=23664.67, stdev=20063.57 00:35:08.430 lat (usec): min=300, max=42369, avg=23671.83, stdev=20063.34 00:35:08.430 clat percentiles (usec): 00:35:08.430 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 330], 00:35:08.431 | 30.00th=[ 347], 40.00th=[ 562], 50.00th=[40633], 60.00th=[41157], 00:35:08.431 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:08.431 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:08.431 | 99.99th=[42206] 00:35:08.431 bw ( KiB/s): min= 512, max= 896, per=45.94%, avg=675.20, stdev=105.32, samples=20 00:35:08.431 iops : 
min= 128, max= 224, avg=168.80, stdev=26.33, samples=20 00:35:08.431 lat (usec) : 500=37.71%, 750=4.49%, 1000=0.35% 00:35:08.431 lat (msec) : 50=57.45% 00:35:08.431 cpu : usr=96.45%, sys=3.11%, ctx=9, majf=0, minf=0 00:35:08.431 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.431 issued rwts: total=1692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.431 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:08.431 filename1: (groupid=0, jobs=1): err= 0: pid=127713: Wed Nov 27 06:37:27 2024 00:35:08.431 read: IOPS=198, BW=794KiB/s (814kB/s)(7968KiB/10029msec) 00:35:08.431 slat (nsec): min=5555, max=51975, avg=6857.50, stdev=3320.17 00:35:08.431 clat (usec): min=291, max=41387, avg=20118.07, stdev=20258.47 00:35:08.431 lat (usec): min=296, max=41395, avg=20124.93, stdev=20258.28 00:35:08.431 clat percentiles (usec): 00:35:08.431 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:35:08.431 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 586], 60.00th=[40633], 00:35:08.431 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:08.431 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:08.431 | 99.99th=[41157] 00:35:08.431 bw ( KiB/s): min= 512, max= 1056, per=54.11%, avg=795.20, stdev=175.97, samples=20 00:35:08.431 iops : min= 128, max= 264, avg=198.80, stdev=43.99, samples=20 00:35:08.431 lat (usec) : 500=44.93%, 750=6.07%, 1000=0.20% 00:35:08.431 lat (msec) : 50=48.80% 00:35:08.431 cpu : usr=96.88%, sys=2.73%, ctx=301, majf=0, minf=0 00:35:08.431 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.431 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.431 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:08.431 00:35:08.431 Run status group 0 (all jobs): 00:35:08.431 READ: bw=1469KiB/s (1505kB/s), 676KiB/s-794KiB/s (692kB/s-814kB/s), io=14.4MiB (15.1MB), run=10019-10029msec 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 ************************************ 00:35:08.431 END TEST fio_dif_1_multi_subsystems 00:35:08.431 ************************************ 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 00:35:08.431 real 0m11.007s 00:35:08.431 user 0m20.040s 00:35:08.431 sys 0m0.778s 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 06:37:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:08.431 06:37:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:08.431 06:37:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 ************************************ 00:35:08.431 START TEST fio_dif_rand_params 00:35:08.431 ************************************ 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 bdev_null0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.431 [2024-11-27 06:37:27.523319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:08.431 06:37:27 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:08.431 06:37:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:08.431 { 00:35:08.431 "params": { 00:35:08.431 "name": "Nvme$subsystem", 00:35:08.431 "trtype": "$TEST_TRANSPORT", 00:35:08.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.431 "adrfam": "ipv4", 00:35:08.431 "trsvcid": "$NVMF_PORT", 00:35:08.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.432 "hdgst": ${hdgst:-false}, 00:35:08.432 "ddgst": ${ddgst:-false} 00:35:08.432 }, 00:35:08.432 "method": "bdev_nvme_attach_controller" 00:35:08.432 } 00:35:08.432 EOF 00:35:08.432 )") 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
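The repeated ldd/grep/awk probes in this trace, together with the LD_PRELOAD assignment just below, are the harness detecting whether the SPDK fio plugin was linked against a sanitizer runtime (libasan or libclang_rt.asan): if it was, that runtime must be preloaded ahead of the plugin or fio would fail to load it. Condensed, with the paths from this run:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # empty here: non-ASan build
  # the same check is repeated for libclang_rt.asan; sanitizer runtime
  # (if any) goes first in LD_PRELOAD, then the fio plugin itself
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

/dev/fd/62 carries the JSON attach config printed in the trace and /dev/fd/61 the fio job description, both fed in by the surrounding shell.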
00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:08.432 "params": { 00:35:08.432 "name": "Nvme0", 00:35:08.432 "trtype": "tcp", 00:35:08.432 "traddr": "10.0.0.3", 00:35:08.432 "adrfam": "ipv4", 00:35:08.432 "trsvcid": "4420", 00:35:08.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.432 "hdgst": false, 00:35:08.432 "ddgst": false 00:35:08.432 }, 00:35:08.432 "method": "bdev_nvme_attach_controller" 00:35:08.432 }' 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:08.432 06:37:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.432 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:08.432 ... 
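The parameters chosen at the top of this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) plus the filename0 banner above pin down the shape of the job file gen_fio_conf writes to /dev/fd/61. A plausible reconstruction, not the helper's verbatim output:

  cat <<'FIO' >/tmp/filename0.fio
# filename=Nvme0n1 is assumed: it is the namespace bdev that
# bdev_nvme_attach_controller typically creates for a controller
# named "Nvme0"; the actual name is not shown in this trace
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based
FIO

The three ~5-second runs reported below (5002-5006 msec, pids 127873-127875) are consistent with runtime=5 and numjobs=3 clones of this one job.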
00:35:08.432 fio-3.35 00:35:08.432 Starting 3 threads 00:35:13.711 00:35:13.711 filename0: (groupid=0, jobs=1): err= 0: pid=127873: Wed Nov 27 06:37:33 2024 00:35:13.711 read: IOPS=233, BW=29.1MiB/s (30.5MB/s)(146MiB/5003msec) 00:35:13.711 slat (nsec): min=5485, max=49458, avg=7297.72, stdev=2072.88 00:35:13.711 clat (usec): min=3142, max=52646, avg=12858.09, stdev=12792.87 00:35:13.711 lat (usec): min=3147, max=52654, avg=12865.39, stdev=12792.90 00:35:13.711 clat percentiles (usec): 00:35:13.711 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5538], 20.00th=[ 6194], 00:35:13.711 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:35:13.711 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[46400], 95.00th=[49021], 00:35:13.711 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52691], 00:35:13.711 | 99.99th=[52691] 00:35:13.711 bw ( KiB/s): min=23040, max=33792, per=24.55%, avg=30122.67, stdev=3132.73, samples=9 00:35:13.711 iops : min= 180, max= 264, avg=235.33, stdev=24.47, samples=9 00:35:13.711 lat (msec) : 4=0.60%, 10=75.47%, 20=12.86%, 50=8.83%, 100=2.23% 00:35:13.711 cpu : usr=95.86%, sys=3.24%, ctx=26, majf=0, minf=0 00:35:13.711 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.711 issued rwts: total=1166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:13.711 filename0: (groupid=0, jobs=1): err= 0: pid=127874: Wed Nov 27 06:37:33 2024 00:35:13.711 read: IOPS=314, BW=39.3MiB/s (41.2MB/s)(197MiB/5006msec) 00:35:13.711 slat (nsec): min=4857, max=22716, avg=7200.44, stdev=1340.97 00:35:13.711 clat (usec): min=3034, max=49774, avg=9534.27, stdev=9589.75 00:35:13.711 lat (usec): min=3041, max=49780, avg=9541.47, stdev=9589.68 00:35:13.711 clat percentiles (usec): 00:35:13.711 | 1.00th=[ 3064], 5.00th=[ 5080], 10.00th=[ 5342], 20.00th=[ 5735], 00:35:13.711 | 30.00th=[ 6063], 40.00th=[ 6980], 50.00th=[ 7439], 60.00th=[ 7701], 00:35:13.711 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[10421], 95.00th=[45876], 00:35:13.711 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:35:13.711 | 99.99th=[49546] 00:35:13.711 bw ( KiB/s): min=24832, max=48640, per=32.77%, avg=40217.60, stdev=6548.41, samples=10 00:35:13.711 iops : min= 194, max= 380, avg=314.20, stdev=51.16, samples=10 00:35:13.711 lat (msec) : 4=1.34%, 10=85.57%, 20=7.18%, 50=5.91% 00:35:13.711 cpu : usr=95.20%, sys=3.88%, ctx=7, majf=0, minf=0 00:35:13.711 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.711 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:13.711 filename0: (groupid=0, jobs=1): err= 0: pid=127875: Wed Nov 27 06:37:33 2024 00:35:13.711 read: IOPS=411, BW=51.5MiB/s (54.0MB/s)(258MiB/5002msec) 00:35:13.711 slat (nsec): min=5471, max=27439, avg=7111.07, stdev=1380.44 00:35:13.711 clat (usec): min=2576, max=48934, avg=7275.19, stdev=4021.65 00:35:13.711 lat (usec): min=2582, max=48940, avg=7282.30, stdev=4021.80 00:35:13.711 clat percentiles (usec): 00:35:13.711 | 1.00th=[ 2868], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 
3130], 00:35:13.711 | 30.00th=[ 4359], 40.00th=[ 6259], 50.00th=[ 6652], 60.00th=[ 7570], 00:35:13.711 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11207], 95.00th=[12911], 00:35:13.711 | 99.00th=[15664], 99.50th=[16319], 99.90th=[46924], 99.95th=[46924], 00:35:13.711 | 99.99th=[49021] 00:35:13.711 bw ( KiB/s): min=39680, max=66304, per=44.85%, avg=55040.00, stdev=8089.36, samples=9 00:35:13.711 iops : min= 310, max= 518, avg=430.00, stdev=63.20, samples=9 00:35:13.711 lat (msec) : 4=29.13%, 10=42.72%, 20=27.86%, 50=0.29% 00:35:13.711 cpu : usr=95.46%, sys=3.40%, ctx=12, majf=0, minf=0 00:35:13.711 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.711 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:13.711 00:35:13.711 Run status group 0 (all jobs): 00:35:13.711 READ: bw=120MiB/s (126MB/s), 29.1MiB/s-51.5MiB/s (30.5MB/s-54.0MB/s), io=600MiB (629MB), run=5002-5006msec 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.973 bdev_null0 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.973 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 [2024-11-27 06:37:33.373915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 bdev_null1 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 bdev_null2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.974 { 00:35:13.974 "params": { 00:35:13.974 "name": "Nvme$subsystem", 00:35:13.974 "trtype": "$TEST_TRANSPORT", 00:35:13.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.974 "adrfam": "ipv4", 
00:35:13.974 "trsvcid": "$NVMF_PORT", 00:35:13.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.974 "hdgst": ${hdgst:-false}, 00:35:13.974 "ddgst": ${ddgst:-false} 00:35:13.974 }, 00:35:13.974 "method": "bdev_nvme_attach_controller" 00:35:13.974 } 00:35:13.974 EOF 00:35:13.974 )") 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.974 { 00:35:13.974 "params": { 00:35:13.974 "name": "Nvme$subsystem", 00:35:13.974 "trtype": "$TEST_TRANSPORT", 00:35:13.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.974 "adrfam": "ipv4", 00:35:13.974 "trsvcid": "$NVMF_PORT", 00:35:13.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.974 "hdgst": ${hdgst:-false}, 00:35:13.974 "ddgst": ${ddgst:-false} 00:35:13.974 }, 00:35:13.974 "method": "bdev_nvme_attach_controller" 00:35:13.974 } 00:35:13.974 EOF 00:35:13.974 )") 00:35:13.974 
06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:13.974 { 00:35:13.974 "params": { 00:35:13.974 "name": "Nvme$subsystem", 00:35:13.974 "trtype": "$TEST_TRANSPORT", 00:35:13.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.974 "adrfam": "ipv4", 00:35:13.974 "trsvcid": "$NVMF_PORT", 00:35:13.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.974 "hdgst": ${hdgst:-false}, 00:35:13.974 "ddgst": ${ddgst:-false} 00:35:13.974 }, 00:35:13.974 "method": "bdev_nvme_attach_controller" 00:35:13.974 } 00:35:13.974 EOF 00:35:13.974 )") 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:13.974 06:37:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:13.974 "params": { 00:35:13.974 "name": "Nvme0", 00:35:13.974 "trtype": "tcp", 00:35:13.974 "traddr": "10.0.0.3", 00:35:13.974 "adrfam": "ipv4", 00:35:13.974 "trsvcid": "4420", 00:35:13.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.974 "hdgst": false, 00:35:13.974 "ddgst": false 00:35:13.974 }, 00:35:13.974 "method": "bdev_nvme_attach_controller" 00:35:13.974 },{ 00:35:13.975 "params": { 00:35:13.975 "name": "Nvme1", 00:35:13.975 "trtype": "tcp", 00:35:13.975 "traddr": "10.0.0.3", 00:35:13.975 "adrfam": "ipv4", 00:35:13.975 "trsvcid": "4420", 00:35:13.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:13.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:13.975 "hdgst": false, 00:35:13.975 "ddgst": false 00:35:13.975 }, 00:35:13.975 "method": "bdev_nvme_attach_controller" 00:35:13.975 },{ 00:35:13.975 "params": { 00:35:13.975 "name": "Nvme2", 00:35:13.975 "trtype": "tcp", 00:35:13.975 "traddr": "10.0.0.3", 00:35:13.975 "adrfam": "ipv4", 00:35:13.975 "trsvcid": "4420", 00:35:13.975 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:13.975 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:13.975 "hdgst": false, 00:35:13.975 "ddgst": false 00:35:13.975 }, 00:35:13.975 "method": "bdev_nvme_attach_controller" 00:35:13.975 }' 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
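The block of JSON that gen_nvmf_target_json prints above is what fio consumes through --spdk_json_conf /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem, pointing the initiator at the three listeners created earlier. Reformatted as a standalone sketch for the Nvme0 entry (Nvme1 and Nvme2 repeat the pattern against cnode1 and cnode2); the outer "subsystems"/"bdev" wrapper and the /tmp path are assumptions for illustration, since the captured printf shows only the inner entries:

# Sketch only -- reconstructs the config fio loads via --spdk_json_conf.
# Parameter values are the ones captured in this run; the wrapper object
# and the file path are assumed (dif.sh streams this over /dev/fd/62).
cat <<'EOF' > /tmp/nvmf_bdev.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF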
00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:13.975 06:37:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.236 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:14.236 ... 00:35:14.236 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:14.236 ... 00:35:14.236 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:14.236 ... 00:35:14.236 fio-3.35 00:35:14.236 Starting 24 threads 00:35:26.442 00:35:26.442 filename0: (groupid=0, jobs=1): err= 0: pid=127987: Wed Nov 27 06:37:44 2024 00:35:26.442 read: IOPS=329, BW=1318KiB/s (1350kB/s)(12.9MiB/10030msec) 00:35:26.442 slat (usec): min=4, max=8018, avg=12.81, stdev=196.96 00:35:26.442 clat (msec): min=21, max=125, avg=48.43, stdev=14.70 00:35:26.442 lat (msec): min=21, max=125, avg=48.44, stdev=14.70 00:35:26.442 clat percentiles (msec): 00:35:26.442 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 34], 20.00th=[ 36], 00:35:26.442 | 30.00th=[ 39], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 48], 00:35:26.442 | 70.00th=[ 52], 80.00th=[ 61], 90.00th=[ 66], 95.00th=[ 72], 00:35:26.442 | 99.00th=[ 96], 99.50th=[ 108], 99.90th=[ 126], 99.95th=[ 126], 00:35:26.442 | 99.99th=[ 126] 00:35:26.442 bw ( KiB/s): min= 896, max= 1600, per=4.24%, avg=1315.55, stdev=162.60, samples=20 00:35:26.442 iops : min= 224, max= 400, avg=328.85, stdev=40.62, samples=20 00:35:26.442 lat (msec) : 50=67.09%, 100=32.30%, 250=0.60% 00:35:26.442 cpu : usr=33.36%, sys=0.33%, ctx=921, majf=0, minf=9 00:35:26.442 IO depths : 1=0.6%, 2=1.5%, 4=7.1%, 8=77.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:35:26.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.442 complete : 0=0.0%, 4=89.5%, 8=6.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 issued rwts: total=3306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.443 filename0: (groupid=0, jobs=1): err= 0: pid=127988: Wed Nov 27 06:37:44 2024 00:35:26.443 read: IOPS=321, BW=1287KiB/s (1318kB/s)(12.6MiB/10028msec) 00:35:26.443 slat (nsec): min=4813, max=63877, avg=7873.06, stdev=5023.88 00:35:26.443 clat (msec): min=20, max=106, avg=49.67, stdev=14.06 00:35:26.443 lat (msec): min=20, max=106, avg=49.68, stdev=14.06 00:35:26.443 clat percentiles (msec): 00:35:26.443 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 37], 00:35:26.443 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 52], 00:35:26.443 | 70.00th=[ 56], 80.00th=[ 61], 90.00th=[ 69], 95.00th=[ 72], 00:35:26.443 | 99.00th=[ 90], 99.50th=[ 97], 99.90th=[ 107], 99.95th=[ 107], 00:35:26.443 | 99.99th=[ 107] 00:35:26.443 bw ( KiB/s): min= 1024, max= 1584, per=4.14%, avg=1283.10, stdev=164.26, samples=20 00:35:26.443 iops : min= 256, max= 396, avg=320.75, stdev=41.07, samples=20 00:35:26.443 lat (msec) : 50=56.57%, 100=43.03%, 250=0.40% 00:35:26.443 cpu : usr=41.97%, sys=0.42%, ctx=1351, majf=0, minf=9 00:35:26.443 IO depths : 1=1.7%, 2=3.9%, 4=12.4%, 8=70.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:26.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:26.443 issued rwts: total=3226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.443 filename0: (groupid=0, jobs=1): err= 0: pid=127989: Wed Nov 27 06:37:44 2024 00:35:26.443 read: IOPS=324, BW=1298KiB/s (1330kB/s)(12.7MiB/10031msec) 00:35:26.443 slat (usec): min=4, max=8018, avg=10.60, stdev=140.47 00:35:26.443 clat (msec): min=20, max=120, avg=49.18, stdev=13.88 00:35:26.443 lat (msec): min=20, max=120, avg=49.19, stdev=13.88 00:35:26.443 clat percentiles (msec): 00:35:26.443 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 36], 00:35:26.443 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 50], 00:35:26.443 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 72], 00:35:26.443 | 99.00th=[ 85], 99.50th=[ 99], 99.90th=[ 121], 99.95th=[ 121], 00:35:26.443 | 99.99th=[ 121] 00:35:26.443 bw ( KiB/s): min= 1024, max= 1552, per=4.18%, avg=1295.60, stdev=157.12, samples=20 00:35:26.443 iops : min= 256, max= 388, avg=323.90, stdev=39.28, samples=20 00:35:26.443 lat (msec) : 50=62.44%, 100=37.10%, 250=0.46% 00:35:26.443 cpu : usr=33.82%, sys=0.34%, ctx=949, majf=0, minf=9 00:35:26.443 IO depths : 1=1.1%, 2=2.9%, 4=11.3%, 8=72.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:26.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 issued rwts: total=3256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.443 filename0: (groupid=0, jobs=1): err= 0: pid=127990: Wed Nov 27 06:37:44 2024 00:35:26.443 read: IOPS=309, BW=1237KiB/s (1267kB/s)(12.1MiB/10045msec) 00:35:26.443 slat (usec): min=3, max=8012, avg=10.40, stdev=143.68 00:35:26.443 clat (msec): min=14, max=120, avg=51.62, stdev=14.72 00:35:26.443 lat (msec): min=14, max=120, avg=51.63, stdev=14.72 00:35:26.443 clat percentiles (msec): 00:35:26.443 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 37], 00:35:26.443 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 56], 00:35:26.443 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 74], 00:35:26.443 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 121], 99.95th=[ 121], 00:35:26.443 | 99.99th=[ 121] 00:35:26.443 bw ( KiB/s): min= 1072, max= 1456, per=3.99%, avg=1236.50, stdev=114.60, samples=20 00:35:26.443 iops : min= 268, max= 364, avg=309.10, stdev=28.61, samples=20 00:35:26.443 lat (msec) : 20=0.84%, 50=52.11%, 100=46.60%, 250=0.45% 00:35:26.443 cpu : usr=35.55%, sys=0.21%, ctx=1083, majf=0, minf=9 00:35:26.443 IO depths : 1=0.6%, 2=1.6%, 4=7.9%, 8=76.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:35:26.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 complete : 0=0.0%, 4=89.8%, 8=6.1%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 issued rwts: total=3107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.443 filename0: (groupid=0, jobs=1): err= 0: pid=127991: Wed Nov 27 06:37:44 2024 00:35:26.443 read: IOPS=374, BW=1499KiB/s (1535kB/s)(14.7MiB/10036msec) 00:35:26.443 slat (usec): min=5, max=4015, avg= 8.52, stdev=65.51 00:35:26.443 clat (usec): min=1422, max=95881, avg=42586.77, stdev=15035.07 00:35:26.443 lat (usec): min=1428, max=95887, avg=42595.30, stdev=15036.08 00:35:26.443 clat percentiles (usec): 00:35:26.443 | 1.00th=[ 1532], 5.00th=[16319], 10.00th=[25560], 20.00th=[31851], 00:35:26.443 | 30.00th=[34866], 
40.00th=[37487], 50.00th=[41157], 60.00th=[47973], 00:35:26.443 | 70.00th=[50070], 80.00th=[55837], 90.00th=[62129], 95.00th=[68682], 00:35:26.443 | 99.00th=[71828], 99.50th=[83362], 99.90th=[95945], 99.95th=[95945], 00:35:26.443 | 99.99th=[95945] 00:35:26.443 bw ( KiB/s): min= 984, max= 2926, per=4.83%, avg=1497.50, stdev=378.91, samples=20 00:35:26.443 iops : min= 246, max= 731, avg=374.35, stdev=94.63, samples=20 00:35:26.443 lat (msec) : 2=1.70%, 4=0.43%, 10=1.70%, 20=1.86%, 50=64.83% 00:35:26.443 lat (msec) : 100=29.48% 00:35:26.443 cpu : usr=43.00%, sys=0.44%, ctx=1163, majf=0, minf=9 00:35:26.443 IO depths : 1=1.0%, 2=2.1%, 4=8.5%, 8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:26.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 issued rwts: total=3762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.443 filename0: (groupid=0, jobs=1): err= 0: pid=127992: Wed Nov 27 06:37:44 2024 00:35:26.443 read: IOPS=315, BW=1262KiB/s (1293kB/s)(12.4MiB/10020msec) 00:35:26.443 slat (usec): min=4, max=8013, avg=11.36, stdev=159.25 00:35:26.443 clat (usec): min=22304, max=95807, avg=50603.09, stdev=13203.04 00:35:26.443 lat (usec): min=22310, max=95816, avg=50614.45, stdev=13209.95 00:35:26.443 clat percentiles (usec): 00:35:26.443 | 1.00th=[23987], 5.00th=[31589], 10.00th=[33162], 20.00th=[39060], 00:35:26.443 | 30.00th=[44827], 40.00th=[47973], 50.00th=[47973], 60.00th=[51119], 00:35:26.443 | 70.00th=[55837], 80.00th=[61604], 90.00th=[69731], 95.00th=[72877], 00:35:26.443 | 99.00th=[85459], 99.50th=[89654], 99.90th=[95945], 99.95th=[95945], 00:35:26.443 | 99.99th=[95945] 00:35:26.443 bw ( KiB/s): min= 1024, max= 1432, per=4.06%, avg=1258.40, stdev=118.24, samples=20 00:35:26.443 iops : min= 256, max= 358, avg=314.60, stdev=29.56, samples=20 00:35:26.443 lat (msec) : 50=58.25%, 100=41.75% 00:35:26.443 cpu : usr=43.60%, sys=0.42%, ctx=1233, majf=0, minf=9 00:35:26.443 IO depths : 1=1.7%, 2=4.1%, 4=14.2%, 8=68.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:26.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 issued rwts: total=3162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.443 filename0: (groupid=0, jobs=1): err= 0: pid=127993: Wed Nov 27 06:37:44 2024 00:35:26.443 read: IOPS=342, BW=1371KiB/s (1404kB/s)(13.4MiB/10026msec) 00:35:26.443 slat (usec): min=3, max=8012, avg=11.95, stdev=170.65 00:35:26.443 clat (msec): min=20, max=133, avg=46.56, stdev=14.01 00:35:26.443 lat (msec): min=20, max=133, avg=46.57, stdev=14.02 00:35:26.443 clat percentiles (msec): 00:35:26.443 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 35], 00:35:26.443 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 46], 60.00th=[ 48], 00:35:26.443 | 70.00th=[ 54], 80.00th=[ 58], 90.00th=[ 65], 95.00th=[ 72], 00:35:26.443 | 99.00th=[ 84], 99.50th=[ 96], 99.90th=[ 134], 99.95th=[ 134], 00:35:26.443 | 99.99th=[ 134] 00:35:26.443 bw ( KiB/s): min= 1024, max= 1680, per=4.41%, avg=1368.10, stdev=171.59, samples=20 00:35:26.443 iops : min= 256, max= 420, avg=342.00, stdev=42.89, samples=20 00:35:26.443 lat (msec) : 50=65.16%, 100=34.55%, 250=0.29% 00:35:26.443 cpu : usr=39.88%, sys=0.45%, ctx=1337, majf=0, minf=9 00:35:26.443 IO depths : 1=1.0%, 2=2.4%, 4=10.4%, 
8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:26.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.443 issued rwts: total=3436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.443 filename0: (groupid=0, jobs=1): err= 0: pid=127994: Wed Nov 27 06:37:44 2024 00:35:26.443 read: IOPS=361, BW=1448KiB/s (1483kB/s)(14.2MiB/10034msec) 00:35:26.443 slat (usec): min=3, max=1002, avg= 7.83, stdev=17.13 00:35:26.443 clat (usec): min=7556, max=92888, avg=44117.83, stdev=13095.50 00:35:26.443 lat (usec): min=7570, max=92894, avg=44125.67, stdev=13095.61 00:35:26.443 clat percentiles (usec): 00:35:26.443 | 1.00th=[13829], 5.00th=[24511], 10.00th=[29492], 20.00th=[32637], 00:35:26.443 | 30.00th=[35914], 40.00th=[38536], 50.00th=[45351], 60.00th=[47973], 00:35:26.443 | 70.00th=[49546], 80.00th=[55837], 90.00th=[60031], 95.00th=[66323], 00:35:26.443 | 99.00th=[74974], 99.50th=[83362], 99.90th=[92799], 99.95th=[92799], 00:35:26.443 | 99.99th=[92799] 00:35:26.443 bw ( KiB/s): min= 1200, max= 1715, per=4.67%, avg=1448.15, stdev=153.65, samples=20 00:35:26.443 iops : min= 300, max= 428, avg=362.00, stdev=38.34, samples=20 00:35:26.443 lat (msec) : 10=0.88%, 20=0.44%, 50=70.57%, 100=28.11% 00:35:26.443 cpu : usr=41.48%, sys=0.33%, ctx=1340, majf=0, minf=9 00:35:26.443 IO depths : 1=1.0%, 2=2.2%, 4=9.4%, 8=75.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:26.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 filename1: (groupid=0, jobs=1): err= 0: pid=127995: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=295, BW=1181KiB/s (1209kB/s)(11.5MiB/10011msec) 00:35:26.444 slat (usec): min=4, max=5014, avg=12.29, stdev=139.13 00:35:26.444 clat (usec): min=24290, max=93844, avg=54058.67, stdev=12398.45 00:35:26.444 lat (usec): min=24296, max=93857, avg=54070.96, stdev=12393.93 00:35:26.444 clat percentiles (usec): 00:35:26.444 | 1.00th=[29754], 5.00th=[33424], 10.00th=[39060], 20.00th=[45876], 00:35:26.444 | 30.00th=[47973], 40.00th=[48497], 50.00th=[51119], 60.00th=[55837], 00:35:26.444 | 70.00th=[60031], 80.00th=[63701], 90.00th=[70779], 95.00th=[76022], 00:35:26.444 | 99.00th=[90702], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:35:26.444 | 99.99th=[93848] 00:35:26.444 bw ( KiB/s): min= 1002, max= 1536, per=3.79%, avg=1176.10, stdev=137.21, samples=20 00:35:26.444 iops : min= 250, max= 384, avg=294.00, stdev=34.34, samples=20 00:35:26.444 lat (msec) : 50=47.80%, 100=52.20% 00:35:26.444 cpu : usr=41.65%, sys=0.47%, ctx=1167, majf=0, minf=9 00:35:26.444 IO depths : 1=2.4%, 2=6.0%, 4=16.5%, 8=64.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:35:26.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 filename1: (groupid=0, jobs=1): err= 0: pid=127996: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=304, BW=1219KiB/s (1248kB/s)(11.9MiB/10021msec) 00:35:26.444 slat (usec): min=3, max=8014, avg=13.72, stdev=177.66 
00:35:26.444 clat (msec): min=23, max=127, avg=52.40, stdev=13.47 00:35:26.444 lat (msec): min=23, max=127, avg=52.41, stdev=13.47 00:35:26.444 clat percentiles (msec): 00:35:26.444 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 44], 00:35:26.444 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 53], 00:35:26.444 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 71], 95.00th=[ 74], 00:35:26.444 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 128], 99.95th=[ 128], 00:35:26.444 | 99.99th=[ 128] 00:35:26.444 bw ( KiB/s): min= 1024, max= 1536, per=3.92%, avg=1214.90, stdev=152.80, samples=20 00:35:26.444 iops : min= 256, max= 384, avg=303.70, stdev=38.23, samples=20 00:35:26.444 lat (msec) : 50=55.58%, 100=44.12%, 250=0.29% 00:35:26.444 cpu : usr=38.73%, sys=0.44%, ctx=1128, majf=0, minf=9 00:35:26.444 IO depths : 1=2.1%, 2=5.0%, 4=14.0%, 8=67.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:26.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=91.3%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=3053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 filename1: (groupid=0, jobs=1): err= 0: pid=127997: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=361, BW=1445KiB/s (1479kB/s)(14.1MiB/10007msec) 00:35:26.444 slat (usec): min=3, max=8054, avg=12.85, stdev=200.31 00:35:26.444 clat (usec): min=5654, max=95913, avg=44249.35, stdev=13255.89 00:35:26.444 lat (usec): min=5658, max=95920, avg=44262.20, stdev=13267.06 00:35:26.444 clat percentiles (usec): 00:35:26.444 | 1.00th=[ 8356], 5.00th=[24773], 10.00th=[30278], 20.00th=[32900], 00:35:26.444 | 30.00th=[35914], 40.00th=[39060], 50.00th=[44827], 60.00th=[47973], 00:35:26.444 | 70.00th=[49021], 80.00th=[55837], 90.00th=[60556], 95.00th=[66847], 00:35:26.444 | 99.00th=[79168], 99.50th=[84411], 99.90th=[95945], 99.95th=[95945], 00:35:26.444 | 99.99th=[95945] 00:35:26.444 bw ( KiB/s): min= 1024, max= 1864, per=4.64%, avg=1439.20, stdev=192.31, samples=20 00:35:26.444 iops : min= 256, max= 466, avg=359.80, stdev=48.08, samples=20 00:35:26.444 lat (msec) : 10=1.33%, 20=0.89%, 50=69.29%, 100=28.50% 00:35:26.444 cpu : usr=40.50%, sys=0.35%, ctx=1111, majf=0, minf=9 00:35:26.444 IO depths : 1=0.7%, 2=1.5%, 4=7.6%, 8=77.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:35:26.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=3614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 filename1: (groupid=0, jobs=1): err= 0: pid=127998: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=297, BW=1190KiB/s (1218kB/s)(11.6MiB/10012msec) 00:35:26.444 slat (usec): min=2, max=8014, avg=17.31, stdev=264.41 00:35:26.444 clat (usec): min=24196, max=96204, avg=53625.40, stdev=12958.21 00:35:26.444 lat (usec): min=24202, max=96211, avg=53642.71, stdev=12961.91 00:35:26.444 clat percentiles (usec): 00:35:26.444 | 1.00th=[26608], 5.00th=[33817], 10.00th=[37487], 20.00th=[45351], 00:35:26.444 | 30.00th=[47973], 40.00th=[47973], 50.00th=[49021], 60.00th=[56886], 00:35:26.444 | 70.00th=[60031], 80.00th=[63701], 90.00th=[71828], 95.00th=[74974], 00:35:26.444 | 99.00th=[86508], 99.50th=[87557], 99.90th=[95945], 99.95th=[95945], 00:35:26.444 | 99.99th=[95945] 00:35:26.444 bw ( KiB/s): min= 952, max= 1328, per=3.82%, avg=1184.80, 
stdev=112.66, samples=20 00:35:26.444 iops : min= 238, max= 332, avg=296.20, stdev=28.16, samples=20 00:35:26.444 lat (msec) : 50=54.20%, 100=45.80% 00:35:26.444 cpu : usr=35.99%, sys=0.44%, ctx=966, majf=0, minf=9 00:35:26.444 IO depths : 1=2.0%, 2=4.9%, 4=14.1%, 8=67.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:26.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=2978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 filename1: (groupid=0, jobs=1): err= 0: pid=127999: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=323, BW=1294KiB/s (1325kB/s)(12.7MiB/10034msec) 00:35:26.444 slat (usec): min=5, max=8016, avg=17.49, stdev=281.05 00:35:26.444 clat (msec): min=16, max=104, avg=49.29, stdev=14.23 00:35:26.444 lat (msec): min=16, max=104, avg=49.31, stdev=14.24 00:35:26.444 clat percentiles (msec): 00:35:26.444 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 34], 20.00th=[ 36], 00:35:26.444 | 30.00th=[ 40], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 51], 00:35:26.444 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 73], 00:35:26.444 | 99.00th=[ 94], 99.50th=[ 100], 99.90th=[ 105], 99.95th=[ 105], 00:35:26.444 | 99.99th=[ 105] 00:35:26.444 bw ( KiB/s): min= 1072, max= 1648, per=4.17%, avg=1291.60, stdev=152.46, samples=20 00:35:26.444 iops : min= 268, max= 412, avg=322.90, stdev=38.12, samples=20 00:35:26.444 lat (msec) : 20=0.49%, 50=59.35%, 100=39.66%, 250=0.49% 00:35:26.444 cpu : usr=33.37%, sys=0.41%, ctx=1047, majf=0, minf=9 00:35:26.444 IO depths : 1=0.9%, 2=2.3%, 4=10.6%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:26.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=3245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 filename1: (groupid=0, jobs=1): err= 0: pid=128000: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=302, BW=1210KiB/s (1239kB/s)(11.8MiB/10012msec) 00:35:26.444 slat (usec): min=3, max=8035, avg=13.56, stdev=206.11 00:35:26.444 clat (usec): min=12713, max=98855, avg=52795.58, stdev=13758.75 00:35:26.444 lat (usec): min=12719, max=98861, avg=52809.14, stdev=13756.15 00:35:26.444 clat percentiles (usec): 00:35:26.444 | 1.00th=[24249], 5.00th=[32900], 10.00th=[35914], 20.00th=[40109], 00:35:26.444 | 30.00th=[47449], 40.00th=[47973], 50.00th=[48497], 60.00th=[55837], 00:35:26.444 | 70.00th=[60031], 80.00th=[62129], 90.00th=[71828], 95.00th=[74974], 00:35:26.444 | 99.00th=[88605], 99.50th=[94897], 99.90th=[99091], 99.95th=[99091], 00:35:26.444 | 99.99th=[99091] 00:35:26.444 bw ( KiB/s): min= 896, max= 1456, per=3.89%, avg=1204.65, stdev=149.33, samples=20 00:35:26.444 iops : min= 224, max= 364, avg=301.15, stdev=37.35, samples=20 00:35:26.444 lat (msec) : 20=0.23%, 50=53.67%, 100=46.10% 00:35:26.444 cpu : usr=34.90%, sys=0.48%, ctx=937, majf=0, minf=9 00:35:26.444 IO depths : 1=1.6%, 2=4.0%, 4=13.2%, 8=69.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:26.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=3028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 
filename1: (groupid=0, jobs=1): err= 0: pid=128001: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=322, BW=1289KiB/s (1320kB/s)(12.6MiB/10040msec) 00:35:26.444 slat (usec): min=3, max=8018, avg=18.02, stdev=281.54 00:35:26.444 clat (usec): min=21318, max=92900, avg=49488.47, stdev=13228.47 00:35:26.444 lat (usec): min=21326, max=92906, avg=49506.49, stdev=13234.73 00:35:26.444 clat percentiles (usec): 00:35:26.444 | 1.00th=[22938], 5.00th=[27395], 10.00th=[33817], 20.00th=[35914], 00:35:26.444 | 30.00th=[44827], 40.00th=[47973], 50.00th=[47973], 60.00th=[49546], 00:35:26.444 | 70.00th=[57410], 80.00th=[60031], 90.00th=[68682], 95.00th=[71828], 00:35:26.444 | 99.00th=[84411], 99.50th=[88605], 99.90th=[92799], 99.95th=[92799], 00:35:26.444 | 99.99th=[92799] 00:35:26.444 bw ( KiB/s): min= 1080, max= 1472, per=4.15%, avg=1287.60, stdev=107.91, samples=20 00:35:26.444 iops : min= 270, max= 368, avg=321.90, stdev=26.98, samples=20 00:35:26.444 lat (msec) : 50=60.49%, 100=39.51% 00:35:26.444 cpu : usr=33.29%, sys=0.41%, ctx=926, majf=0, minf=9 00:35:26.444 IO depths : 1=1.1%, 2=2.7%, 4=10.5%, 8=73.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:26.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.444 issued rwts: total=3235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.444 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.444 filename1: (groupid=0, jobs=1): err= 0: pid=128002: Wed Nov 27 06:37:44 2024 00:35:26.444 read: IOPS=319, BW=1279KiB/s (1310kB/s)(12.5MiB/10018msec) 00:35:26.444 slat (usec): min=2, max=8014, avg=10.05, stdev=141.55 00:35:26.445 clat (usec): min=20382, max=93332, avg=49954.72, stdev=13258.98 00:35:26.445 lat (usec): min=20389, max=93338, avg=49964.76, stdev=13261.91 00:35:26.445 clat percentiles (usec): 00:35:26.445 | 1.00th=[24249], 5.00th=[30802], 10.00th=[32113], 20.00th=[38011], 00:35:26.445 | 30.00th=[41681], 40.00th=[47973], 50.00th=[47973], 60.00th=[51643], 00:35:26.445 | 70.00th=[56886], 80.00th=[60556], 90.00th=[67634], 95.00th=[71828], 00:35:26.445 | 99.00th=[83362], 99.50th=[85459], 99.90th=[92799], 99.95th=[93848], 00:35:26.445 | 99.99th=[93848] 00:35:26.445 bw ( KiB/s): min= 1024, max= 1632, per=4.11%, avg=1274.90, stdev=166.72, samples=20 00:35:26.445 iops : min= 256, max= 408, avg=318.70, stdev=41.68, samples=20 00:35:26.445 lat (msec) : 50=56.85%, 100=43.15% 00:35:26.445 cpu : usr=40.68%, sys=0.31%, ctx=1170, majf=0, minf=9 00:35:26.445 IO depths : 1=1.2%, 2=2.7%, 4=9.8%, 8=73.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 issued rwts: total=3203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.445 filename2: (groupid=0, jobs=1): err= 0: pid=128003: Wed Nov 27 06:37:44 2024 00:35:26.445 read: IOPS=295, BW=1181KiB/s (1209kB/s)(11.5MiB/10012msec) 00:35:26.445 slat (usec): min=3, max=8017, avg=13.15, stdev=208.37 00:35:26.445 clat (msec): min=24, max=107, avg=54.11, stdev=12.89 00:35:26.445 lat (msec): min=24, max=107, avg=54.13, stdev=12.89 00:35:26.445 clat percentiles (msec): 00:35:26.445 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 46], 00:35:26.445 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:35:26.445 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 78], 
00:35:26.445 | 99.00th=[ 89], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:35:26.445 | 99.99th=[ 108] 00:35:26.445 bw ( KiB/s): min= 1024, max= 1408, per=3.80%, avg=1178.00, stdev=105.56, samples=20 00:35:26.445 iops : min= 256, max= 352, avg=294.50, stdev=26.39, samples=20 00:35:26.445 lat (msec) : 50=48.09%, 100=51.61%, 250=0.30% 00:35:26.445 cpu : usr=33.87%, sys=0.24%, ctx=970, majf=0, minf=9 00:35:26.445 IO depths : 1=1.6%, 2=4.0%, 4=13.8%, 8=68.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 complete : 0=0.0%, 4=91.4%, 8=4.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 issued rwts: total=2955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.445 filename2: (groupid=0, jobs=1): err= 0: pid=128004: Wed Nov 27 06:37:44 2024 00:35:26.445 read: IOPS=331, BW=1325KiB/s (1356kB/s)(13.0MiB/10029msec) 00:35:26.445 slat (usec): min=3, max=8020, avg=17.52, stdev=277.88 00:35:26.445 clat (usec): min=17064, max=97115, avg=48152.63, stdev=13390.07 00:35:26.445 lat (usec): min=17070, max=97127, avg=48170.15, stdev=13392.54 00:35:26.445 clat percentiles (usec): 00:35:26.445 | 1.00th=[22938], 5.00th=[30016], 10.00th=[32113], 20.00th=[35390], 00:35:26.445 | 30.00th=[38536], 40.00th=[44827], 50.00th=[47973], 60.00th=[49546], 00:35:26.445 | 70.00th=[55837], 80.00th=[60031], 90.00th=[66847], 95.00th=[71828], 00:35:26.445 | 99.00th=[83362], 99.50th=[88605], 99.90th=[96994], 99.95th=[96994], 00:35:26.445 | 99.99th=[96994] 00:35:26.445 bw ( KiB/s): min= 984, max= 1632, per=4.26%, avg=1321.85, stdev=169.91, samples=20 00:35:26.445 iops : min= 246, max= 408, avg=330.45, stdev=42.47, samples=20 00:35:26.445 lat (msec) : 20=0.30%, 50=60.13%, 100=39.57% 00:35:26.445 cpu : usr=39.99%, sys=0.31%, ctx=1270, majf=0, minf=9 00:35:26.445 IO depths : 1=0.7%, 2=1.5%, 4=7.9%, 8=76.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:35:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 issued rwts: total=3321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.445 filename2: (groupid=0, jobs=1): err= 0: pid=128005: Wed Nov 27 06:37:44 2024 00:35:26.445 read: IOPS=328, BW=1314KiB/s (1346kB/s)(12.9MiB/10015msec) 00:35:26.445 slat (usec): min=3, max=8099, avg=11.04, stdev=141.16 00:35:26.445 clat (msec): min=6, max=108, avg=48.64, stdev=14.33 00:35:26.445 lat (msec): min=15, max=108, avg=48.65, stdev=14.32 00:35:26.445 clat percentiles (msec): 00:35:26.445 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 36], 00:35:26.445 | 30.00th=[ 40], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 48], 00:35:26.445 | 70.00th=[ 55], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 72], 00:35:26.445 | 99.00th=[ 85], 99.50th=[ 96], 99.90th=[ 109], 99.95th=[ 109], 00:35:26.445 | 99.99th=[ 109] 00:35:26.445 bw ( KiB/s): min= 896, max= 1936, per=4.22%, avg=1309.65, stdev=213.30, samples=20 00:35:26.445 iops : min= 224, max= 484, avg=327.40, stdev=53.34, samples=20 00:35:26.445 lat (msec) : 10=0.03%, 20=0.43%, 50=66.47%, 100=32.92%, 250=0.15% 00:35:26.445 cpu : usr=35.70%, sys=0.32%, ctx=955, majf=0, minf=9 00:35:26.445 IO depths : 1=1.1%, 2=2.7%, 4=10.0%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 complete : 0=0.0%, 4=90.1%, 
8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 issued rwts: total=3290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.445 filename2: (groupid=0, jobs=1): err= 0: pid=128006: Wed Nov 27 06:37:44 2024 00:35:26.445 read: IOPS=337, BW=1351KiB/s (1383kB/s)(13.2MiB/10024msec) 00:35:26.445 slat (usec): min=3, max=8016, avg=14.66, stdev=188.65 00:35:26.445 clat (usec): min=19789, max=90158, avg=47299.26, stdev=12819.49 00:35:26.445 lat (usec): min=19795, max=90164, avg=47313.92, stdev=12822.90 00:35:26.445 clat percentiles (usec): 00:35:26.445 | 1.00th=[24249], 5.00th=[29492], 10.00th=[31851], 20.00th=[34341], 00:35:26.445 | 30.00th=[38011], 40.00th=[44827], 50.00th=[47973], 60.00th=[49021], 00:35:26.445 | 70.00th=[54264], 80.00th=[57934], 90.00th=[63177], 95.00th=[69731], 00:35:26.445 | 99.00th=[81265], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:35:26.445 | 99.99th=[89654] 00:35:26.445 bw ( KiB/s): min= 1080, max= 1664, per=4.35%, avg=1347.60, stdev=168.12, samples=20 00:35:26.445 iops : min= 270, max= 416, avg=336.90, stdev=42.03, samples=20 00:35:26.445 lat (msec) : 20=0.18%, 50=61.89%, 100=37.93% 00:35:26.445 cpu : usr=43.64%, sys=0.43%, ctx=1170, majf=0, minf=9 00:35:26.445 IO depths : 1=1.3%, 2=2.9%, 4=10.0%, 8=73.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 issued rwts: total=3385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.445 filename2: (groupid=0, jobs=1): err= 0: pid=128007: Wed Nov 27 06:37:44 2024 00:35:26.445 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10022msec) 00:35:26.445 slat (usec): min=5, max=8015, avg=17.45, stdev=256.57 00:35:26.445 clat (msec): min=16, max=129, avg=45.20, stdev=14.22 00:35:26.445 lat (msec): min=16, max=129, avg=45.22, stdev=14.22 00:35:26.445 clat percentiles (msec): 00:35:26.445 | 1.00th=[ 19], 5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 33], 00:35:26.445 | 30.00th=[ 36], 40.00th=[ 40], 50.00th=[ 46], 60.00th=[ 48], 00:35:26.445 | 70.00th=[ 51], 80.00th=[ 56], 90.00th=[ 64], 95.00th=[ 72], 00:35:26.445 | 99.00th=[ 93], 99.50th=[ 94], 99.90th=[ 130], 99.95th=[ 130], 00:35:26.445 | 99.99th=[ 130] 00:35:26.445 bw ( KiB/s): min= 1024, max= 1968, per=4.54%, avg=1408.60, stdev=279.27, samples=20 00:35:26.445 iops : min= 256, max= 492, avg=352.15, stdev=69.82, samples=20 00:35:26.445 lat (msec) : 20=1.27%, 50=68.86%, 100=29.72%, 250=0.14% 00:35:26.445 cpu : usr=44.82%, sys=0.46%, ctx=1258, majf=0, minf=9 00:35:26.445 IO depths : 1=1.8%, 2=3.9%, 4=12.4%, 8=70.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:35:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 issued rwts: total=3533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.445 filename2: (groupid=0, jobs=1): err= 0: pid=128008: Wed Nov 27 06:37:44 2024 00:35:26.445 read: IOPS=300, BW=1203KiB/s (1232kB/s)(11.8MiB/10013msec) 00:35:26.445 slat (usec): min=2, max=5016, avg=13.36, stdev=144.83 00:35:26.445 clat (msec): min=13, max=107, avg=53.09, stdev=13.00 00:35:26.445 lat (msec): min=13, max=107, avg=53.10, stdev=13.00 00:35:26.445 clat percentiles (msec): 00:35:26.445 | 1.00th=[ 27], 5.00th=[ 34], 
10.00th=[ 38], 20.00th=[ 45], 00:35:26.445 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 54], 00:35:26.445 | 70.00th=[ 58], 80.00th=[ 63], 90.00th=[ 71], 95.00th=[ 75], 00:35:26.445 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 108], 99.95th=[ 108], 00:35:26.445 | 99.99th=[ 108] 00:35:26.445 bw ( KiB/s): min= 936, max= 1528, per=3.86%, avg=1197.75, stdev=129.69, samples=20 00:35:26.445 iops : min= 234, max= 382, avg=299.40, stdev=32.44, samples=20 00:35:26.445 lat (msec) : 20=0.53%, 50=46.93%, 100=52.24%, 250=0.30% 00:35:26.445 cpu : usr=40.68%, sys=0.42%, ctx=1615, majf=0, minf=9 00:35:26.445 IO depths : 1=2.3%, 2=5.8%, 4=16.8%, 8=64.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:35:26.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.445 issued rwts: total=3011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.445 filename2: (groupid=0, jobs=1): err= 0: pid=128009: Wed Nov 27 06:37:44 2024 00:35:26.445 read: IOPS=315, BW=1261KiB/s (1291kB/s)(12.3MiB/10015msec) 00:35:26.445 slat (usec): min=3, max=4014, avg= 8.99, stdev=71.49 00:35:26.445 clat (usec): min=17775, max=95905, avg=50690.78, stdev=13358.05 00:35:26.445 lat (usec): min=17780, max=95911, avg=50699.77, stdev=13358.19 00:35:26.445 clat percentiles (usec): 00:35:26.445 | 1.00th=[23987], 5.00th=[31065], 10.00th=[33817], 20.00th=[37487], 00:35:26.445 | 30.00th=[44827], 40.00th=[47973], 50.00th=[49021], 60.00th=[52167], 00:35:26.445 | 70.00th=[57934], 80.00th=[61080], 90.00th=[68682], 95.00th=[72877], 00:35:26.445 | 99.00th=[86508], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:35:26.445 | 99.99th=[95945] 00:35:26.445 bw ( KiB/s): min= 1112, max= 1648, per=4.05%, avg=1256.00, stdev=120.80, samples=20 00:35:26.445 iops : min= 278, max= 412, avg=314.00, stdev=30.20, samples=20 00:35:26.445 lat (msec) : 20=0.16%, 50=53.26%, 100=46.58% 00:35:26.445 cpu : usr=38.61%, sys=0.47%, ctx=1155, majf=0, minf=9 00:35:26.445 IO depths : 1=1.6%, 2=3.9%, 4=12.7%, 8=70.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:35:26.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.446 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.446 issued rwts: total=3156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.446 filename2: (groupid=0, jobs=1): err= 0: pid=128010: Wed Nov 27 06:37:44 2024 00:35:26.446 read: IOPS=297, BW=1189KiB/s (1218kB/s)(11.6MiB/10008msec) 00:35:26.446 slat (usec): min=3, max=8035, avg=20.75, stdev=302.71 00:35:26.446 clat (msec): min=8, max=107, avg=53.69, stdev=12.71 00:35:26.446 lat (msec): min=8, max=107, avg=53.71, stdev=12.70 00:35:26.446 clat percentiles (msec): 00:35:26.446 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:35:26.446 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 56], 00:35:26.446 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 75], 00:35:26.446 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 108], 00:35:26.446 | 99.99th=[ 108] 00:35:26.446 bw ( KiB/s): min= 1017, max= 1376, per=3.83%, avg=1186.45, stdev=104.48, samples=20 00:35:26.446 iops : min= 254, max= 344, avg=296.60, stdev=26.14, samples=20 00:35:26.446 lat (msec) : 10=0.44%, 20=0.10%, 50=50.87%, 100=48.52%, 250=0.07% 00:35:26.446 cpu : usr=35.51%, sys=0.28%, ctx=953, majf=0, minf=9 00:35:26.446 IO depths : 
1=2.2%, 2=5.1%, 4=14.5%, 8=67.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:26.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.446 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.446 issued rwts: total=2976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:26.446 00:35:26.446 Run status group 0 (all jobs): 00:35:26.446 READ: bw=30.3MiB/s (31.7MB/s), 1181KiB/s-1499KiB/s (1209kB/s-1535kB/s), io=304MiB (319MB), run=10007-10045msec 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 bdev_null0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 [2024-11-27 06:37:44.649707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 bdev_null1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:26.446 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:26.446 { 00:35:26.446 "params": { 00:35:26.446 "name": "Nvme$subsystem", 00:35:26.446 "trtype": 
"$TEST_TRANSPORT", 00:35:26.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.446 "adrfam": "ipv4", 00:35:26.446 "trsvcid": "$NVMF_PORT", 00:35:26.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.447 "hdgst": ${hdgst:-false}, 00:35:26.447 "ddgst": ${ddgst:-false} 00:35:26.447 }, 00:35:26.447 "method": "bdev_nvme_attach_controller" 00:35:26.447 } 00:35:26.447 EOF 00:35:26.447 )") 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:26.447 { 00:35:26.447 "params": { 00:35:26.447 "name": "Nvme$subsystem", 00:35:26.447 "trtype": "$TEST_TRANSPORT", 00:35:26.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.447 "adrfam": "ipv4", 00:35:26.447 "trsvcid": "$NVMF_PORT", 00:35:26.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.447 "hdgst": ${hdgst:-false}, 00:35:26.447 "ddgst": ${ddgst:-false} 00:35:26.447 }, 00:35:26.447 "method": "bdev_nvme_attach_controller" 00:35:26.447 } 00:35:26.447 EOF 00:35:26.447 )") 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:26.447 "params": { 00:35:26.447 "name": "Nvme0", 00:35:26.447 "trtype": "tcp", 00:35:26.447 "traddr": "10.0.0.3", 00:35:26.447 "adrfam": "ipv4", 00:35:26.447 "trsvcid": "4420", 00:35:26.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.447 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:26.447 "hdgst": false, 00:35:26.447 "ddgst": false 00:35:26.447 }, 00:35:26.447 "method": "bdev_nvme_attach_controller" 00:35:26.447 },{ 00:35:26.447 "params": { 00:35:26.447 "name": "Nvme1", 00:35:26.447 "trtype": "tcp", 00:35:26.447 "traddr": "10.0.0.3", 00:35:26.447 "adrfam": "ipv4", 00:35:26.447 "trsvcid": "4420", 00:35:26.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:26.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:26.447 "hdgst": false, 00:35:26.447 "ddgst": false 00:35:26.447 }, 00:35:26.447 "method": "bdev_nvme_attach_controller" 00:35:26.447 }' 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:26.447 06:37:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.447 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:26.447 ... 00:35:26.447 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:26.447 ... 
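The two job definitions fio printed just above follow from the parameters set at target/dif.sh@115 (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1). gen_fio_conf streams the jobfile over /dev/fd/61, so it is never written to disk; the sketch below is a plausible reconstruction, with the filename values assuming SPDK's usual NvmeXn1 bdev naming and the thread/time_based options assumed rather than visible in the banner. Two job sections times numjobs=2 accounts for the "Starting 4 threads" reported below.

# Sketch: reconstruction of the jobfile streamed over /dev/fd/61.
# rw/bs/iodepth match the fio banner; numjobs/runtime come from dif.sh@115;
# thread, time_based, and the NvmeXn1 filenames are assumptions.
cat <<'EOF' > /tmp/dif_rand.fio
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=5
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF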
00:35:26.447 fio-3.35 00:35:26.447 Starting 4 threads 00:35:31.716 00:35:31.716 filename0: (groupid=0, jobs=1): err= 0: pid=128138: Wed Nov 27 06:37:50 2024 00:35:31.716 read: IOPS=2743, BW=21.4MiB/s (22.5MB/s)(107MiB/5003msec) 00:35:31.716 slat (nsec): min=2865, max=17207, avg=5854.24, stdev=721.76 00:35:31.716 clat (usec): min=794, max=10057, avg=2886.50, stdev=216.34 00:35:31.716 lat (usec): min=800, max=10063, avg=2892.36, stdev=216.35 00:35:31.716 clat percentiles (usec): 00:35:31.716 | 1.00th=[ 2835], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2868], 00:35:31.716 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:35:31.716 | 70.00th=[ 2900], 80.00th=[ 2900], 90.00th=[ 2900], 95.00th=[ 2933], 00:35:31.716 | 99.00th=[ 3032], 99.50th=[ 3589], 99.90th=[ 6259], 99.95th=[10028], 00:35:31.716 | 99.99th=[10028] 00:35:31.716 bw ( KiB/s): min=21632, max=22016, per=25.04%, avg=21971.56, stdev=127.44, samples=9 00:35:31.716 iops : min= 2704, max= 2752, avg=2746.44, stdev=15.93, samples=9 00:35:31.716 lat (usec) : 1000=0.12% 00:35:31.716 lat (msec) : 4=99.77%, 10=0.07%, 20=0.05% 00:35:31.716 cpu : usr=96.02%, sys=2.92%, ctx=5, majf=0, minf=0 00:35:31.716 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 issued rwts: total=13728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:31.716 filename0: (groupid=0, jobs=1): err= 0: pid=128139: Wed Nov 27 06:37:50 2024 00:35:31.716 read: IOPS=2741, BW=21.4MiB/s (22.5MB/s)(107MiB/5001msec) 00:35:31.716 slat (nsec): min=3686, max=30688, avg=7786.61, stdev=3378.72 00:35:31.716 clat (usec): min=1451, max=14796, avg=2878.31, stdev=294.05 00:35:31.716 lat (usec): min=1456, max=14809, avg=2886.10, stdev=293.98 00:35:31.716 clat percentiles (usec): 00:35:31.716 | 1.00th=[ 2769], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2835], 00:35:31.716 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2868], 00:35:31.716 | 70.00th=[ 2900], 80.00th=[ 2900], 90.00th=[ 2900], 95.00th=[ 2933], 00:35:31.716 | 99.00th=[ 2999], 99.50th=[ 3032], 99.90th=[ 3589], 99.95th=[14746], 00:35:31.716 | 99.99th=[14746] 00:35:31.716 bw ( KiB/s): min=21376, max=22016, per=25.01%, avg=21944.89, stdev=213.33, samples=9 00:35:31.716 iops : min= 2672, max= 2752, avg=2743.11, stdev=26.67, samples=9 00:35:31.716 lat (msec) : 2=0.06%, 4=99.85%, 10=0.04%, 20=0.06% 00:35:31.716 cpu : usr=96.86%, sys=2.34%, ctx=9, majf=0, minf=9 00:35:31.716 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 issued rwts: total=13712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:31.716 filename1: (groupid=0, jobs=1): err= 0: pid=128140: Wed Nov 27 06:37:50 2024 00:35:31.716 read: IOPS=2741, BW=21.4MiB/s (22.5MB/s)(107MiB/5001msec) 00:35:31.716 slat (nsec): min=3070, max=31225, avg=7619.24, stdev=3286.30 00:35:31.716 clat (usec): min=1426, max=15835, avg=2881.31, stdev=304.89 00:35:31.716 lat (usec): min=1434, max=15845, avg=2888.93, stdev=304.80 00:35:31.716 clat percentiles (usec): 00:35:31.716 | 1.00th=[ 2769], 5.00th=[ 2802], 10.00th=[ 2835], 
20.00th=[ 2835], 00:35:31.716 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2868], 00:35:31.716 | 70.00th=[ 2900], 80.00th=[ 2900], 90.00th=[ 2900], 95.00th=[ 2933], 00:35:31.716 | 99.00th=[ 2999], 99.50th=[ 3064], 99.90th=[ 3621], 99.95th=[15008], 00:35:31.716 | 99.99th=[15139] 00:35:31.716 bw ( KiB/s): min=21376, max=22016, per=25.01%, avg=21944.89, stdev=213.33, samples=9 00:35:31.716 iops : min= 2672, max= 2752, avg=2743.11, stdev=26.67, samples=9 00:35:31.716 lat (msec) : 2=0.06%, 4=99.88%, 20=0.06% 00:35:31.716 cpu : usr=97.44%, sys=1.80%, ctx=4, majf=0, minf=0 00:35:31.716 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 issued rwts: total=13712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:31.716 filename1: (groupid=0, jobs=1): err= 0: pid=128141: Wed Nov 27 06:37:50 2024 00:35:31.716 read: IOPS=2742, BW=21.4MiB/s (22.5MB/s)(107MiB/5003msec) 00:35:31.716 slat (nsec): min=2976, max=56181, avg=5942.09, stdev=951.06 00:35:31.716 clat (usec): min=1522, max=10012, avg=2887.74, stdev=196.98 00:35:31.716 lat (usec): min=1529, max=10019, avg=2893.69, stdev=197.05 00:35:31.716 clat percentiles (usec): 00:35:31.716 | 1.00th=[ 2835], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2868], 00:35:31.716 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:35:31.716 | 70.00th=[ 2900], 80.00th=[ 2900], 90.00th=[ 2900], 95.00th=[ 2900], 00:35:31.716 | 99.00th=[ 2999], 99.50th=[ 3097], 99.90th=[ 6063], 99.95th=[ 6718], 00:35:31.716 | 99.99th=[ 7177] 00:35:31.716 bw ( KiB/s): min=21547, max=22016, per=25.02%, avg=21949.67, stdev=156.82, samples=9 00:35:31.716 iops : min= 2693, max= 2752, avg=2743.67, stdev=19.72, samples=9 00:35:31.716 lat (msec) : 2=0.33%, 4=99.18%, 10=0.49%, 20=0.01% 00:35:31.716 cpu : usr=96.02%, sys=2.94%, ctx=84, majf=0, minf=0 00:35:31.716 IO depths : 1=11.8%, 2=25.0%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:31.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:31.716 issued rwts: total=13719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:31.716 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:31.716 00:35:31.716 Run status group 0 (all jobs): 00:35:31.716 READ: bw=85.7MiB/s (89.8MB/s), 21.4MiB/s-21.4MiB/s (22.5MB/s-22.5MB/s), io=429MiB (450MB), run=5001-5003msec 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.716 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.717 06:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:31.717 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.717 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.717 00:35:31.717 real 0m23.107s 00:35:31.717 user 2m8.344s 00:35:31.717 sys 0m2.589s 00:35:31.717 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.717 06:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 ************************************ 00:35:31.717 END TEST fio_dif_rand_params 00:35:31.717 ************************************ 00:35:31.717 06:37:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:31.717 06:37:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:31.717 06:37:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.717 06:37:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 ************************************ 00:35:31.717 START TEST fio_dif_digest 00:35:31.717 ************************************ 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:31.717 06:37:50 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 bdev_null0 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:31.717 [2024-11-27 06:37:50.673496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:31.717 06:37:50 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:31.717 { 00:35:31.717 "params": { 00:35:31.717 "name": "Nvme$subsystem", 00:35:31.717 "trtype": "$TEST_TRANSPORT", 00:35:31.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.717 "adrfam": "ipv4", 00:35:31.717 "trsvcid": "$NVMF_PORT", 00:35:31.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.717 "hdgst": ${hdgst:-false}, 00:35:31.717 "ddgst": ${ddgst:-false} 00:35:31.717 }, 00:35:31.717 "method": "bdev_nvme_attach_controller" 00:35:31.717 } 00:35:31.717 EOF 00:35:31.717 )") 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:31.717 "params": { 00:35:31.717 "name": "Nvme0", 00:35:31.717 "trtype": "tcp", 00:35:31.717 "traddr": "10.0.0.3", 00:35:31.717 "adrfam": "ipv4", 00:35:31.717 "trsvcid": "4420", 00:35:31.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:31.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:31.717 "hdgst": true, 00:35:31.717 "ddgst": true 00:35:31.717 }, 00:35:31.717 "method": "bdev_nvme_attach_controller" 00:35:31.717 }' 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:31.717 06:37:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.717 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:31.717 ... 
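Annotation: the config+=/jq/IFS/printf steps traced above are gen_nvmf_target_json assembling one JSON fragment per subsystem via an unquoted heredoc, so ${hdgst:-false}/${ddgst:-false} expand at build time; for this digest test both expand to true, enabling NVMe/TCP per-PDU header and data digests (CRC32C). A condensed, self-contained sketch of that pattern, not the verbatim helper:
# Build attach-controller fragments, comma-join them inside a jq heredoc.
hdgst=true ddgst=true        # the digest test sets both, as printed above
config=()
for subsystem in 0; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.3",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
              "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
  )")
done
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [
  $(IFS=","; printf '%s\n' "${config[*]}")
] } ] }
JSON
# IFS="," makes "${config[*]}" comma-join the fragments so they form a valid
# JSON array inside the jq heredoc; jq . then validates and pretty-prints.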
00:35:31.717 fio-3.35 00:35:31.717 Starting 3 threads 00:35:43.935 00:35:43.935 filename0: (groupid=0, jobs=1): err= 0: pid=128247: Wed Nov 27 06:38:01 2024 00:35:43.935 read: IOPS=232, BW=29.0MiB/s (30.4MB/s)(291MiB/10008msec) 00:35:43.935 slat (nsec): min=5936, max=34429, avg=7331.80, stdev=1422.51 00:35:43.935 clat (usec): min=5279, max=51868, avg=12907.90, stdev=12133.28 00:35:43.935 lat (usec): min=5288, max=51875, avg=12915.23, stdev=12133.25 00:35:43.935 clat percentiles (usec): 00:35:43.935 | 1.00th=[ 6325], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8586], 00:35:43.935 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:35:43.935 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[49546], 00:35:43.935 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:35:43.935 | 99.99th=[51643] 00:35:43.935 bw ( KiB/s): min=22016, max=40192, per=27.28%, avg=29817.26, stdev=5638.70, samples=19 00:35:43.935 iops : min= 172, max= 314, avg=232.95, stdev=44.05, samples=19 00:35:43.935 lat (msec) : 10=88.81%, 20=1.38%, 50=6.54%, 100=3.27% 00:35:43.935 cpu : usr=95.51%, sys=3.51%, ctx=9, majf=0, minf=9 00:35:43.935 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.935 issued rwts: total=2324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:43.935 filename0: (groupid=0, jobs=1): err= 0: pid=128248: Wed Nov 27 06:38:01 2024 00:35:43.935 read: IOPS=334, BW=41.9MiB/s (43.9MB/s)(421MiB/10044msec) 00:35:43.935 slat (nsec): min=5929, max=48111, avg=8195.69, stdev=1674.56 00:35:43.935 clat (usec): min=4901, max=53288, avg=8934.48, stdev=3527.40 00:35:43.935 lat (usec): min=4909, max=53297, avg=8942.67, stdev=3527.43 00:35:43.935 clat percentiles (usec): 00:35:43.935 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6390], 00:35:43.935 | 30.00th=[ 6783], 40.00th=[ 7767], 50.00th=[ 9372], 60.00th=[10028], 00:35:43.935 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:35:43.935 | 99.00th=[12387], 99.50th=[44827], 99.90th=[50594], 99.95th=[52167], 00:35:43.935 | 99.99th=[53216] 00:35:43.935 bw ( KiB/s): min=34816, max=48128, per=39.37%, avg=43033.60, stdev=3603.10, samples=20 00:35:43.935 iops : min= 272, max= 376, avg=336.20, stdev=28.15, samples=20 00:35:43.935 lat (msec) : 10=60.79%, 20=38.70%, 50=0.27%, 100=0.24% 00:35:43.935 cpu : usr=93.80%, sys=4.87%, ctx=17, majf=0, minf=9 00:35:43.935 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.935 issued rwts: total=3364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:43.935 filename0: (groupid=0, jobs=1): err= 0: pid=128249: Wed Nov 27 06:38:01 2024 00:35:43.935 read: IOPS=288, BW=36.1MiB/s (37.9MB/s)(361MiB/10004msec) 00:35:43.935 slat (nsec): min=5884, max=38042, avg=7741.69, stdev=1314.80 00:35:43.935 clat (usec): min=3164, max=14922, avg=10378.10, stdev=2587.64 00:35:43.935 lat (usec): min=3170, max=14931, avg=10385.85, stdev=2587.83 00:35:43.935 clat percentiles (usec): 00:35:43.935 | 1.00th=[ 3523], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 
7635], 00:35:43.935 | 30.00th=[ 8029], 40.00th=[ 8979], 50.00th=[11338], 60.00th=[11994], 00:35:43.935 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:35:43.935 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14746], 99.95th=[14877], 00:35:43.935 | 99.99th=[14877] 00:35:43.935 bw ( KiB/s): min=33024, max=42240, per=33.89%, avg=37039.16, stdev=2297.75, samples=19 00:35:43.935 iops : min= 258, max= 330, avg=289.37, stdev=17.95, samples=19 00:35:43.935 lat (msec) : 4=1.77%, 10=42.19%, 20=56.04% 00:35:43.935 cpu : usr=96.36%, sys=2.76%, ctx=8, majf=0, minf=0 00:35:43.935 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.935 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.935 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:43.935 00:35:43.935 Run status group 0 (all jobs): 00:35:43.935 READ: bw=107MiB/s (112MB/s), 29.0MiB/s-41.9MiB/s (30.4MB/s-43.9MB/s), io=1072MiB (1124MB), run=10004-10044msec 00:35:43.935 06:38:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:43.935 06:38:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:43.935 06:38:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.935 06:38:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.936 00:35:43.936 real 0m10.820s 00:35:43.936 user 0m29.154s 00:35:43.936 sys 0m1.291s 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.936 06:38:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:43.936 ************************************ 00:35:43.936 END TEST fio_dif_digest 00:35:43.936 ************************************ 00:35:43.936 06:38:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:43.936 06:38:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:43.936 rmmod nvme_tcp 00:35:43.936 rmmod nvme_fabrics 00:35:43.936 rmmod nvme_keyring 00:35:43.936 06:38:01 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 127473 ']' 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 127473 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 127473 ']' 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 127473 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127473 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:43.936 killing process with pid 127473 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127473' 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@973 -- # kill 127473 00:35:43.936 06:38:01 nvmf_dif -- common/autotest_common.sh@978 -- # wait 127473 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:43.936 06:38:01 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:43.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:43.936 Waiting for block devices as requested 00:35:43.936 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:43.936 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.936 06:38:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:43.936 06:38:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.936 06:38:02 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:35:43.936 00:35:43.936 real 0m58.828s 00:35:43.936 user 3m54.546s 00:35:43.936 sys 0m10.406s 00:35:43.936 ************************************ 00:35:43.936 06:38:02 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.936 06:38:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:43.936 END TEST nvmf_dif 00:35:43.936 ************************************ 00:35:43.936 06:38:02 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:43.936 06:38:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:43.936 06:38:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.936 06:38:02 -- common/autotest_common.sh@10 -- # set +x 00:35:43.936 ************************************ 00:35:43.936 START TEST nvmf_abort_qd_sizes 00:35:43.936 ************************************ 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:43.936 * Looking for test storage... 00:35:43.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.936 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.937 --rc genhtml_branch_coverage=1 00:35:43.937 --rc genhtml_function_coverage=1 00:35:43.937 --rc genhtml_legend=1 00:35:43.937 --rc geninfo_all_blocks=1 00:35:43.937 --rc geninfo_unexecuted_blocks=1 00:35:43.937 00:35:43.937 ' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.937 --rc genhtml_branch_coverage=1 00:35:43.937 --rc genhtml_function_coverage=1 00:35:43.937 --rc genhtml_legend=1 00:35:43.937 --rc geninfo_all_blocks=1 00:35:43.937 --rc geninfo_unexecuted_blocks=1 00:35:43.937 00:35:43.937 ' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.937 --rc genhtml_branch_coverage=1 00:35:43.937 --rc genhtml_function_coverage=1 00:35:43.937 --rc genhtml_legend=1 00:35:43.937 --rc geninfo_all_blocks=1 00:35:43.937 --rc geninfo_unexecuted_blocks=1 00:35:43.937 00:35:43.937 ' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:43.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.937 --rc genhtml_branch_coverage=1 00:35:43.937 --rc genhtml_function_coverage=1 00:35:43.937 --rc genhtml_legend=1 00:35:43.937 --rc geninfo_all_blocks=1 00:35:43.937 --rc geninfo_unexecuted_blocks=1 00:35:43.937 00:35:43.937 ' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:43.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:43.937 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:43.938 Cannot find device "nvmf_init_br" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:43.938 Cannot find device "nvmf_init_br2" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:43.938 Cannot find device "nvmf_tgt_br" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:43.938 Cannot find device "nvmf_tgt_br2" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:43.938 Cannot find device "nvmf_init_br" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:43.938 Cannot find device "nvmf_init_br2" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:43.938 Cannot find device "nvmf_tgt_br" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:43.938 Cannot find device "nvmf_tgt_br2" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:43.938 Cannot find device "nvmf_br" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:43.938 Cannot find device "nvmf_init_if" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:43.938 Cannot find device "nvmf_init_if2" 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:43.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
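Annotation: the "Cannot find device" / "Cannot open network namespace" messages above are the expected no-op teardown of a topology that does not exist yet; the ip/iptables commands that follow rebuild it. A condensed sketch of that topology (nvmf_veth_init), showing one of the two initiator/target pairs, with names and addresses exactly as in the log:
# Target interfaces live in a netns so the NVMe/TCP target gets its own stack;
# a bridge joins the host-side ends of the veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator -> in-netns target sanity check, as below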
00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:43.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:43.938 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:43.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:43.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:35:43.939 00:35:43.939 --- 10.0.0.3 ping statistics --- 00:35:43.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.939 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:43.939 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:43.939 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:35:43.939 00:35:43.939 --- 10.0.0.4 ping statistics --- 00:35:43.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.939 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:43.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:43.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:35:43.939 00:35:43.939 --- 10.0.0.1 ping statistics --- 00:35:43.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.939 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:43.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:43.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:35:43.939 00:35:43.939 --- 10.0.0.2 ping statistics --- 00:35:43.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.939 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:43.939 06:38:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:44.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:44.200 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:44.200 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=128889 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 128889 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 128889 ']' 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.200 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.201 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.201 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.201 06:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.461 [2024-11-27 06:38:03.762014] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:35:44.461 [2024-11-27 06:38:03.762225] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.461 [2024-11-27 06:38:03.880186] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:35:44.461 [2024-11-27 06:38:03.908368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:44.461 [2024-11-27 06:38:03.929457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.462 [2024-11-27 06:38:03.929637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.462 [2024-11-27 06:38:03.929701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.462 [2024-11-27 06:38:03.929728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.462 [2024-11-27 06:38:03.929745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.462 [2024-11-27 06:38:03.930578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.462 [2024-11-27 06:38:03.930939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.462 [2024-11-27 06:38:03.934976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.462 [2024-11-27 06:38:03.935162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:45.400 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 
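Annotation: the printf calls above compute the PCI class filter 01/08/02 (mass storage / non-volatile memory / NVM Express); the nvme_in_userspace enumeration traced next reduces to one lspci pipeline, reproduced here verbatim from the trace:
# List all PCI functions, keep programming interface 02, match class "0108",
# strip quotes, and print the BDF (e.g. 0000:00:10.0).
lspci -mm -n -D \
  | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'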
00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:45.401 06:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.401 ************************************ 00:35:45.401 START TEST spdk_target_abort 00:35:45.401 ************************************ 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.401 spdk_targetn1 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.401 [2024-11-27 06:38:04.824038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.401 [2024-11-27 06:38:04.866051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:45.401 06:38:04 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.401 06:38:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:48.683 Initializing NVMe Controllers 00:35:48.683 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:35:48.683 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:48.683 Initialization complete. Launching workers. 
00:35:48.683 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14805, failed: 0 00:35:48.683 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1116, failed to submit 13689 00:35:48.683 success 707, unsuccessful 409, failed 0 00:35:48.683 06:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.683 06:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.971 Initializing NVMe Controllers 00:35:51.971 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:35:51.971 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:51.971 Initialization complete. Launching workers. 00:35:51.971 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5974, failed: 0 00:35:51.971 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 4735 00:35:51.971 success 285, unsuccessful 954, failed 0 00:35:51.971 06:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:51.971 06:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:55.350 Initializing NVMe Controllers 00:35:55.350 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:35:55.350 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:55.350 Initialization complete. Launching workers. 
00:35:55.350 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34818, failed: 0 00:35:55.350 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2638, failed to submit 32180 00:35:55.350 success 485, unsuccessful 2153, failed 0 00:35:55.350 06:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:55.350 06:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.350 06:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.350 06:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.350 06:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:55.350 06:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.350 06:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 128889 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 128889 ']' 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 128889 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128889 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:55.919 killing process with pid 128889 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128889' 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 128889 00:35:55.919 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 128889 00:35:56.177 ************************************ 00:35:56.177 END TEST spdk_target_abort 00:35:56.177 ************************************ 00:35:56.177 00:35:56.177 real 0m10.772s 00:35:56.177 user 0m44.398s 00:35:56.177 sys 0m1.373s 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.177 06:38:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:56.177 06:38:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.177 06:38:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.177 06:38:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:56.177 ************************************ 00:35:56.177 START TEST kernel_target_abort 00:35:56.177 
************************************ 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.177 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:56.178 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:56.178 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:56.178 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:56.178 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:56.178 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:56.178 06:38:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:56.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:56.436 Waiting for block devices as requested 00:35:56.436 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:56.695 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:56.695 No valid GPT data, bailing 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:35:56.695 No valid GPT data, bailing 00:35:56.695 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
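Before a block device is handed to the kernel target, nvmf/common.sh vets it: zoned namespaces are skipped, and a device that still carries a partition table is treated as in use, so "No valid GPT data, bailing" is the expected outcome for a free disk. A condensed sketch of that per-device loop (the zoned test and blkid probe are taken from the trace; the SPDK-GPT probe via spdk-gpt.py is folded into the same "in use" decision here):

    # Sketch: pick a free NVMe block device, mirroring the trace; the last free one wins.
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $(< "$block/queue/zoned") == none ]] || continue   # skip zoned namespaces
        pt=$(blkid -s PTTYPE -o value "/dev/$dev") || pt=     # empty when no partition table
        [[ -z $pt ]] || continue                              # has a partition table: in use
        nvme=/dev/$dev                                        # here: ends up as /dev/nvme1n1
    done
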
00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:35:56.957 No valid GPT data, bailing 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:56.957 No valid GPT data, bailing 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 --hostid=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 -a 10.0.0.1 -t tcp -s 4420 00:35:56.957 00:35:56.957 Discovery Log Number of Records 2, Generation counter 2 00:35:56.957 =====Discovery Log Entry 0====== 00:35:56.957 trtype: tcp 00:35:56.957 adrfam: ipv4 00:35:56.957 subtype: current discovery subsystem 00:35:56.957 treq: not specified, sq flow control disable supported 00:35:56.957 portid: 1 00:35:56.957 trsvcid: 4420 00:35:56.957 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:56.957 traddr: 10.0.0.1 00:35:56.957 eflags: none 00:35:56.957 sectype: none 00:35:56.957 =====Discovery Log Entry 1====== 00:35:56.957 trtype: tcp 00:35:56.957 adrfam: ipv4 00:35:56.957 subtype: nvme subsystem 00:35:56.957 treq: not specified, sq flow control disable supported 00:35:56.957 portid: 1 00:35:56.957 trsvcid: 4420 00:35:56.957 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:56.957 traddr: 10.0.0.1 00:35:56.957 eflags: none 00:35:56.957 sectype: none 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:56.957 06:38:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.957 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:56.958 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.958 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:56.958 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:56.958 06:38:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.261 Initializing NVMe Controllers 00:36:00.261 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.261 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:00.261 Initialization complete. Launching workers. 00:36:00.261 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29256, failed: 0 00:36:00.261 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29256, failed to submit 0 00:36:00.261 success 0, unsuccessful 29256, failed 0 00:36:00.261 06:38:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.261 06:38:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.563 Initializing NVMe Controllers 00:36:03.563 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.563 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.563 Initialization complete. Launching workers. 
00:36:03.563 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64974, failed: 0 00:36:03.563 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29370, failed to submit 35604 00:36:03.563 success 0, unsuccessful 29370, failed 0 00:36:03.563 06:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.563 06:38:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:06.854 Initializing NVMe Controllers 00:36:06.854 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:06.854 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:06.854 Initialization complete. Launching workers. 00:36:06.854 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104397, failed: 0 00:36:06.854 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26180, failed to submit 78217 00:36:06.854 success 0, unsuccessful 26180, failed 0 00:36:06.854 06:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:06.854 06:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:06.854 06:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:06.854 06:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:06.854 06:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:06.854 06:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:06.854 06:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:06.854 06:38:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:06.854 06:38:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:06.854 06:38:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:07.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:10.413 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:10.413 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:10.413 00:36:10.413 real 0m14.073s 00:36:10.413 user 0m6.629s 00:36:10.413 sys 0m4.722s 00:36:10.413 06:38:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.413 ************************************ 00:36:10.413 END TEST kernel_target_abort 00:36:10.413 ************************************ 00:36:10.413 06:38:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:10.413 
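For reference, everything configure_kernel_target did above, and clean_kernel_target tore down again just now, is plain configfs manipulation. Consolidated, with the NQN, backing device, and address from this run (the trace records the echo commands without their redirection targets, so the per-attribute file names below are the standard Linux nvmet configfs ABI and are inferred, not quoted):

    # Sketch of the kernel NVMe/TCP target setup traced above.
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # model string; target file assumed
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"         # the free namespace found earlier
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

The host side then drives the same abort sweep as against the SPDK target, once per queue depth (4, 24, 64):

    build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
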
06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.413 rmmod nvme_tcp 00:36:10.413 rmmod nvme_fabrics 00:36:10.413 rmmod nvme_keyring 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 128889 ']' 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 128889 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 128889 ']' 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 128889 00:36:10.413 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (128889) - No such process 00:36:10.413 Process with pid 128889 is not found 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 128889 is not found' 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:10.413 06:38:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:10.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:10.677 Waiting for block devices as requested 00:36:10.677 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:10.937 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:10.937 06:38:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:10.937 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:36:11.197 00:36:11.197 real 0m28.061s 00:36:11.197 user 0m52.067s 00:36:11.197 sys 0m7.301s 00:36:11.197 ************************************ 00:36:11.197 END TEST nvmf_abort_qd_sizes 00:36:11.197 ************************************ 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.197 06:38:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:11.197 06:38:30 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:11.197 06:38:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:11.197 06:38:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.197 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:36:11.197 ************************************ 00:36:11.197 START TEST keyring_file 00:36:11.197 ************************************ 00:36:11.197 06:38:30 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:11.197 * Looking for test storage... 
00:36:11.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:11.197 06:38:30 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:11.197 06:38:30 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:11.197 06:38:30 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:11.457 06:38:30 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.457 06:38:30 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:11.457 06:38:30 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.457 06:38:30 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.457 --rc genhtml_branch_coverage=1 00:36:11.457 --rc genhtml_function_coverage=1 00:36:11.457 --rc genhtml_legend=1 00:36:11.457 --rc geninfo_all_blocks=1 00:36:11.457 --rc geninfo_unexecuted_blocks=1 00:36:11.457 00:36:11.457 ' 00:36:11.457 06:38:30 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.457 --rc genhtml_branch_coverage=1 00:36:11.457 --rc genhtml_function_coverage=1 00:36:11.457 --rc genhtml_legend=1 00:36:11.457 --rc geninfo_all_blocks=1 00:36:11.457 --rc 
geninfo_unexecuted_blocks=1 00:36:11.457 00:36:11.457 ' 00:36:11.457 06:38:30 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.457 --rc genhtml_branch_coverage=1 00:36:11.457 --rc genhtml_function_coverage=1 00:36:11.457 --rc genhtml_legend=1 00:36:11.457 --rc geninfo_all_blocks=1 00:36:11.457 --rc geninfo_unexecuted_blocks=1 00:36:11.457 00:36:11.457 ' 00:36:11.457 06:38:30 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.457 --rc genhtml_branch_coverage=1 00:36:11.457 --rc genhtml_function_coverage=1 00:36:11.457 --rc genhtml_legend=1 00:36:11.458 --rc geninfo_all_blocks=1 00:36:11.458 --rc geninfo_unexecuted_blocks=1 00:36:11.458 00:36:11.458 ' 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:11.458 06:38:30 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.458 06:38:30 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.458 06:38:30 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.458 06:38:30 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.458 06:38:30 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.458 06:38:30 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.458 06:38:30 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.458 06:38:30 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:11.458 06:38:30 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:11.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:11.458 06:38:30 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JPJzmEIl26 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JPJzmEIl26 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JPJzmEIl26 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.JPJzmEIl26 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hRBgOd0RNz 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:11.458 06:38:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hRBgOd0RNz 00:36:11.458 06:38:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hRBgOd0RNz 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.hRBgOd0RNz 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@30 -- # tgtpid=129818 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@32 -- # waitforlisten 129818 00:36:11.458 06:38:30 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:11.458 06:38:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 129818 ']' 00:36:11.458 06:38:30 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.458 06:38:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.458 06:38:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:36:11.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.458 06:38:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.458 06:38:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:11.458 [2024-11-27 06:38:30.959976] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:36:11.458 [2024-11-27 06:38:30.960222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129818 ] 00:36:11.717 [2024-11-27 06:38:31.076582] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:11.717 [2024-11-27 06:38:31.103753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.717 [2024-11-27 06:38:31.145794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:12.656 06:38:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:12.656 [2024-11-27 06:38:31.866441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.656 null0 00:36:12.656 [2024-11-27 06:38:31.898400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:12.656 [2024-11-27 06:38:31.898634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.656 06:38:31 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.656 06:38:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:12.656 [2024-11-27 06:38:31.926388] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:12.656 2024/11/27 06:38:31 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: 
Code=-32602 Msg=Invalid parameters 00:36:12.656 request: 00:36:12.656 { 00:36:12.656 "method": "nvmf_subsystem_add_listener", 00:36:12.656 "params": { 00:36:12.656 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.656 "secure_channel": false, 00:36:12.656 "listen_address": { 00:36:12.656 "trtype": "tcp", 00:36:12.656 "traddr": "127.0.0.1", 00:36:12.656 "trsvcid": "4420" 00:36:12.656 } 00:36:12.656 } 00:36:12.657 } 00:36:12.657 Got JSON-RPC error response 00:36:12.657 GoRPCClient: error on JSON-RPC call 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:12.657 06:38:31 keyring_file -- keyring/file.sh@47 -- # bperfpid=129852 00:36:12.657 06:38:31 keyring_file -- keyring/file.sh@49 -- # waitforlisten 129852 /var/tmp/bperf.sock 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 129852 ']' 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:12.657 06:38:31 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.657 06:38:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:12.657 [2024-11-27 06:38:31.977427] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:36:12.657 [2024-11-27 06:38:31.977681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129852 ] 00:36:12.657 [2024-11-27 06:38:32.095611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
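The JSON-RPC failure above is the point of the test, not a regression: setup registered the 127.0.0.1:4420 listener once (the "Target Listening" notice earlier), and file.sh@44 wraps a second registration in NOT, which passes only if the command exits non-zero. Outside the harness, the failing call is simply (arguments exactly as in the trace; rpc_cmd forwards to scripts/rpc.py):

    # Second registration of an existing listener; expected to be rejected.
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0
    # => "Listener already exists", JSON-RPC Code=-32602 Msg=Invalid parameters
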
00:36:12.657 [2024-11-27 06:38:32.116533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.657 [2024-11-27 06:38:32.146358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.597 06:38:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.597 06:38:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:13.597 06:38:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:13.597 06:38:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:13.597 06:38:33 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hRBgOd0RNz 00:36:13.597 06:38:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hRBgOd0RNz 00:36:13.857 06:38:33 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:13.857 06:38:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:13.857 06:38:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.857 06:38:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.857 06:38:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:14.118 06:38:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JPJzmEIl26 == \/\t\m\p\/\t\m\p\.\J\P\J\z\m\E\I\l\2\6 ]] 00:36:14.118 06:38:33 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:14.118 06:38:33 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:14.118 06:38:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:14.118 06:38:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.118 06:38:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.379 06:38:33 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.hRBgOd0RNz == \/\t\m\p\/\t\m\p\.\h\R\B\g\O\d\0\R\N\z ]] 00:36:14.379 06:38:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:14.379 06:38:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:14.379 06:38:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.379 06:38:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.379 06:38:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.379 06:38:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:14.640 06:38:33 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:14.640 06:38:33 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:14.640 06:38:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.640 06:38:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:14.640 06:38:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.640 06:38:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.640 06:38:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:14.902 06:38:34 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:14.902 06:38:34 keyring_file -- keyring/file.sh@58 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:14.902 06:38:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:14.902 [2024-11-27 06:38:34.415002] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:15.162 nvme0n1 00:36:15.162 06:38:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:15.162 06:38:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:15.162 06:38:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.162 06:38:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.162 06:38:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.162 06:38:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.423 06:38:34 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:15.423 06:38:34 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:15.423 06:38:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:15.423 06:38:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.423 06:38:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:15.423 06:38:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.423 06:38:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.682 06:38:34 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:15.682 06:38:34 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:15.682 Running I/O for 1 seconds... 
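
For readers skimming the numbers that follow: the one-second workload being measured was launched earlier in paused mode and is driven entirely over bdevperf's own RPC socket (both commands verbatim from this trace):

    # bdevperf waits for configuration because of -z:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
    # after keys and the TLS controller are set up, release the queued run:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
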
00:36:16.616 19305.00 IOPS, 75.41 MiB/s 00:36:16.616 Latency(us) 00:36:16.616 [2024-11-27T06:38:36.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.616 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:16.616 nvme0n1 : 1.00 19350.46 75.59 0.00 0.00 6602.91 2747.47 19761.62 00:36:16.616 [2024-11-27T06:38:36.146Z] =================================================================================================================== 00:36:16.616 [2024-11-27T06:38:36.146Z] Total : 19350.46 75.59 0.00 0.00 6602.91 2747.47 19761.62 00:36:16.616 { 00:36:16.616 "results": [ 00:36:16.616 { 00:36:16.616 "job": "nvme0n1", 00:36:16.616 "core_mask": "0x2", 00:36:16.616 "workload": "randrw", 00:36:16.616 "percentage": 50, 00:36:16.616 "status": "finished", 00:36:16.616 "queue_depth": 128, 00:36:16.616 "io_size": 4096, 00:36:16.616 "runtime": 1.004317, 00:36:16.616 "iops": 19350.464046710353, 00:36:16.616 "mibps": 75.58775018246232, 00:36:16.616 "io_failed": 0, 00:36:16.616 "io_timeout": 0, 00:36:16.616 "avg_latency_us": 6602.9069282225455, 00:36:16.616 "min_latency_us": 2747.4707692307693, 00:36:16.616 "max_latency_us": 19761.624615384615 00:36:16.616 } 00:36:16.616 ], 00:36:16.616 "core_count": 1 00:36:16.616 } 00:36:16.616 06:38:36 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:16.616 06:38:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:16.874 06:38:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:16.874 06:38:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:16.874 06:38:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.874 06:38:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.874 06:38:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:16.874 06:38:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.132 06:38:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:17.132 06:38:36 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:17.132 06:38:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:17.132 06:38:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.132 06:38:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.132 06:38:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:17.132 06:38:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.390 06:38:36 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:17.390 06:38:36 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:17.390 06:38:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:17.390 06:38:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:17.390 06:38:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:17.390 06:38:36 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:17.390 06:38:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:17.390 06:38:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:17.390 06:38:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:17.391 06:38:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:17.391 [2024-11-27 06:38:36.892141] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:17.391 [2024-11-27 06:38:36.892935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195460 (107): Transport endpoint is not connected 00:36:17.391 [2024-11-27 06:38:36.893929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2195460 (9): Bad file descriptor 00:36:17.391 [2024-11-27 06:38:36.894928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:17.391 [2024-11-27 06:38:36.894940] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:17.391 [2024-11-27 06:38:36.894945] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:17.391 [2024-11-27 06:38:36.894951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
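
The error trail above is the intended outcome of this step: key0, not key1, matches the PSK the target was configured with (the successful attach earlier showed as much), so attaching with key1 drops the connection. The assertion is made with the NOT wrapper from autotest_common.sh; paraphrasing its contract in a short sketch (attach command verbatim from the trace):

    # "NOT cmd" succeeds only when cmd exits non-zero, so this attach must
    # fail for the test to pass; the JSON-RPC error dump below is the
    # expected evidence of that failure
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1
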
00:36:17.391 2024/11/27 06:38:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:17.391 request: 00:36:17.391 { 00:36:17.391 "method": "bdev_nvme_attach_controller", 00:36:17.391 "params": { 00:36:17.391 "name": "nvme0", 00:36:17.391 "trtype": "tcp", 00:36:17.391 "traddr": "127.0.0.1", 00:36:17.391 "adrfam": "ipv4", 00:36:17.391 "trsvcid": "4420", 00:36:17.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.391 "prchk_reftag": false, 00:36:17.391 "prchk_guard": false, 00:36:17.391 "hdgst": false, 00:36:17.391 "ddgst": false, 00:36:17.391 "psk": "key1", 00:36:17.391 "allow_unrecognized_csi": false 00:36:17.391 } 00:36:17.391 } 00:36:17.391 Got JSON-RPC error response 00:36:17.391 GoRPCClient: error on JSON-RPC call 00:36:17.391 06:38:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:17.391 06:38:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:17.391 06:38:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:17.391 06:38:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:17.391 06:38:36 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:17.391 06:38:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.391 06:38:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:17.391 06:38:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.391 06:38:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.391 06:38:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.649 06:38:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:17.649 06:38:37 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:17.649 06:38:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:17.649 06:38:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.649 06:38:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.649 06:38:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:17.649 06:38:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.907 06:38:37 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:17.907 06:38:37 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:17.907 06:38:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:18.165 06:38:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:18.166 06:38:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:18.424 06:38:37 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:18.424 06:38:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:18.424 06:38:37 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:18.424 06:38:37 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:18.424 06:38:37 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.JPJzmEIl26 00:36:18.424 06:38:37 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:18.424 06:38:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:18.424 06:38:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:18.424 06:38:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:18.424 06:38:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:18.424 06:38:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:18.424 06:38:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:18.424 06:38:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:18.424 06:38:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:18.682 [2024-11-27 06:38:38.129491] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JPJzmEIl26': 0100660 00:36:18.682 [2024-11-27 06:38:38.129516] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:18.682 2024/11/27 06:38:38 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.JPJzmEIl26], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:36:18.682 request: 00:36:18.682 { 00:36:18.682 "method": "keyring_file_add_key", 00:36:18.682 "params": { 00:36:18.682 "name": "key0", 00:36:18.682 "path": "/tmp/tmp.JPJzmEIl26" 00:36:18.682 } 00:36:18.682 } 00:36:18.682 Got JSON-RPC error response 00:36:18.682 GoRPCClient: error on JSON-RPC call 00:36:18.682 06:38:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:18.682 06:38:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:18.682 06:38:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:18.682 06:38:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:18.682 06:38:38 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.JPJzmEIl26 00:36:18.682 06:38:38 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:18.682 06:38:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JPJzmEIl26 00:36:18.940 06:38:38 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.JPJzmEIl26 00:36:18.940 06:38:38 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:18.940 06:38:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.940 06:38:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.940 06:38:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.940 06:38:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.940 06:38:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.198 06:38:38 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:19.198 06:38:38 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.198 06:38:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:19.198 06:38:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.198 06:38:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:19.198 06:38:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:19.199 06:38:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:19.199 06:38:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:19.199 06:38:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.199 06:38:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.456 [2024-11-27 06:38:38.753611] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.JPJzmEIl26': No such file or directory 00:36:19.456 [2024-11-27 06:38:38.753631] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:19.456 [2024-11-27 06:38:38.753644] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:19.456 [2024-11-27 06:38:38.753649] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:19.456 [2024-11-27 06:38:38.753654] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:19.456 [2024-11-27 06:38:38.753658] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:19.456 2024/11/27 06:38:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:36:19.456 request: 00:36:19.456 { 00:36:19.456 "method": "bdev_nvme_attach_controller", 00:36:19.456 "params": { 00:36:19.456 "name": "nvme0", 00:36:19.456 "trtype": "tcp", 00:36:19.456 "traddr": "127.0.0.1", 00:36:19.456 "adrfam": "ipv4", 00:36:19.456 "trsvcid": "4420", 00:36:19.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.456 "prchk_reftag": false, 00:36:19.456 "prchk_guard": false, 00:36:19.456 "hdgst": false, 00:36:19.456 "ddgst": false, 00:36:19.456 "psk": "key0", 00:36:19.456 "allow_unrecognized_csi": false 00:36:19.456 } 00:36:19.456 } 00:36:19.456 Got JSON-RPC error response 00:36:19.456 
GoRPCClient: error on JSON-RPC call 00:36:19.456 06:38:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:19.456 06:38:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:19.456 06:38:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:19.456 06:38:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:19.456 06:38:38 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:19.457 06:38:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:19.457 06:38:38 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:19.457 06:38:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:19.457 06:38:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:19.457 06:38:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:19.457 06:38:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:19.457 06:38:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:19.714 06:38:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.06ZGUl2OaP 00:36:19.714 06:38:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:19.714 06:38:38 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:19.714 06:38:38 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:19.714 06:38:38 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:19.714 06:38:38 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:19.714 06:38:38 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:19.714 06:38:38 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:19.714 06:38:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.06ZGUl2OaP 00:36:19.714 06:38:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.06ZGUl2OaP 00:36:19.714 06:38:39 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.06ZGUl2OaP 00:36:19.714 06:38:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.06ZGUl2OaP 00:36:19.714 06:38:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.06ZGUl2OaP 00:36:19.714 06:38:39 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.714 06:38:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.971 nvme0n1 00:36:19.971 06:38:39 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:19.971 06:38:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.971 06:38:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.971 06:38:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.971 06:38:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.229 06:38:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
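
The key rotation completed above follows a fixed recipe from test/keyring/common.sh; condensed into a sketch (helper names as traced; format_interchange_psk wraps the raw hex key into the NVMeTLSkey-1 interchange form via the inline Python shown above):

    # prep_key key0 00112233445566778899aabbccddeeff 0, condensed:
    path=$(mktemp)                       # /tmp/tmp.06ZGUl2OaP in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"                   # 0660 was rejected earlier as "Operation not permitted"
    bperf_cmd keyring_file_add_key key0 "$path"
    # with a present, well-permissioned key file the attach succeeds again:
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
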
00:36:20.229 06:38:39 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:20.229 06:38:39 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:20.229 06:38:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:20.486 06:38:39 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:20.486 06:38:39 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:20.486 06:38:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.486 06:38:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.486 06:38:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.745 06:38:40 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:20.745 06:38:40 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:20.745 06:38:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.745 06:38:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:20.745 06:38:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.745 06:38:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.745 06:38:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.003 06:38:40 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:21.003 06:38:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:21.003 06:38:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:21.261 06:38:40 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:21.261 06:38:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.261 06:38:40 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:21.261 06:38:40 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:21.261 06:38:40 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.06ZGUl2OaP 00:36:21.261 06:38:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.06ZGUl2OaP 00:36:21.519 06:38:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hRBgOd0RNz 00:36:21.519 06:38:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hRBgOd0RNz 00:36:21.777 06:38:41 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:21.777 06:38:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:22.035 nvme0n1 00:36:22.035 06:38:41 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:22.035 06:38:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
00:36:22.293 06:38:41 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:22.293 "subsystems": [ 00:36:22.293 { 00:36:22.293 "subsystem": "keyring", 00:36:22.293 "config": [ 00:36:22.293 { 00:36:22.293 "method": "keyring_file_add_key", 00:36:22.293 "params": { 00:36:22.293 "name": "key0", 00:36:22.293 "path": "/tmp/tmp.06ZGUl2OaP" 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "keyring_file_add_key", 00:36:22.293 "params": { 00:36:22.293 "name": "key1", 00:36:22.293 "path": "/tmp/tmp.hRBgOd0RNz" 00:36:22.293 } 00:36:22.293 } 00:36:22.293 ] 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "subsystem": "iobuf", 00:36:22.293 "config": [ 00:36:22.293 { 00:36:22.293 "method": "iobuf_set_options", 00:36:22.293 "params": { 00:36:22.293 "enable_numa": false, 00:36:22.293 "large_bufsize": 135168, 00:36:22.293 "large_pool_count": 1024, 00:36:22.293 "small_bufsize": 8192, 00:36:22.293 "small_pool_count": 8192 00:36:22.293 } 00:36:22.293 } 00:36:22.293 ] 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "subsystem": "sock", 00:36:22.293 "config": [ 00:36:22.293 { 00:36:22.293 "method": "sock_set_default_impl", 00:36:22.293 "params": { 00:36:22.293 "impl_name": "posix" 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "sock_impl_set_options", 00:36:22.293 "params": { 00:36:22.293 "enable_ktls": false, 00:36:22.293 "enable_placement_id": 0, 00:36:22.293 "enable_quickack": false, 00:36:22.293 "enable_recv_pipe": true, 00:36:22.293 "enable_zerocopy_send_client": false, 00:36:22.293 "enable_zerocopy_send_server": true, 00:36:22.293 "impl_name": "ssl", 00:36:22.293 "recv_buf_size": 4096, 00:36:22.293 "send_buf_size": 4096, 00:36:22.293 "tls_version": 0, 00:36:22.293 "zerocopy_threshold": 0 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "sock_impl_set_options", 00:36:22.293 "params": { 00:36:22.293 "enable_ktls": false, 00:36:22.293 "enable_placement_id": 0, 00:36:22.293 "enable_quickack": false, 00:36:22.293 "enable_recv_pipe": true, 00:36:22.293 "enable_zerocopy_send_client": false, 00:36:22.293 "enable_zerocopy_send_server": true, 00:36:22.293 "impl_name": "posix", 00:36:22.293 "recv_buf_size": 2097152, 00:36:22.293 "send_buf_size": 2097152, 00:36:22.293 "tls_version": 0, 00:36:22.293 "zerocopy_threshold": 0 00:36:22.293 } 00:36:22.293 } 00:36:22.293 ] 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "subsystem": "vmd", 00:36:22.293 "config": [] 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "subsystem": "accel", 00:36:22.293 "config": [ 00:36:22.293 { 00:36:22.293 "method": "accel_set_options", 00:36:22.293 "params": { 00:36:22.293 "buf_count": 2048, 00:36:22.293 "large_cache_size": 16, 00:36:22.293 "sequence_count": 2048, 00:36:22.293 "small_cache_size": 128, 00:36:22.293 "task_count": 2048 00:36:22.293 } 00:36:22.293 } 00:36:22.293 ] 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "subsystem": "bdev", 00:36:22.293 "config": [ 00:36:22.293 { 00:36:22.293 "method": "bdev_set_options", 00:36:22.293 "params": { 00:36:22.293 "bdev_auto_examine": true, 00:36:22.293 "bdev_io_cache_size": 256, 00:36:22.293 "bdev_io_pool_size": 65535, 00:36:22.293 "iobuf_large_cache_size": 16, 00:36:22.293 "iobuf_small_cache_size": 128 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "bdev_raid_set_options", 00:36:22.293 "params": { 00:36:22.293 "process_max_bandwidth_mb_sec": 0, 00:36:22.293 "process_window_size_kb": 1024 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "bdev_iscsi_set_options", 00:36:22.293 "params": { 00:36:22.293 
"timeout_sec": 30 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "bdev_nvme_set_options", 00:36:22.293 "params": { 00:36:22.293 "action_on_timeout": "none", 00:36:22.293 "allow_accel_sequence": false, 00:36:22.293 "arbitration_burst": 0, 00:36:22.293 "bdev_retry_count": 3, 00:36:22.293 "ctrlr_loss_timeout_sec": 0, 00:36:22.293 "delay_cmd_submit": true, 00:36:22.293 "dhchap_dhgroups": [ 00:36:22.293 "null", 00:36:22.293 "ffdhe2048", 00:36:22.293 "ffdhe3072", 00:36:22.293 "ffdhe4096", 00:36:22.293 "ffdhe6144", 00:36:22.293 "ffdhe8192" 00:36:22.293 ], 00:36:22.293 "dhchap_digests": [ 00:36:22.293 "sha256", 00:36:22.293 "sha384", 00:36:22.293 "sha512" 00:36:22.293 ], 00:36:22.293 "disable_auto_failback": false, 00:36:22.293 "fast_io_fail_timeout_sec": 0, 00:36:22.293 "generate_uuids": false, 00:36:22.293 "high_priority_weight": 0, 00:36:22.293 "io_path_stat": false, 00:36:22.293 "io_queue_requests": 512, 00:36:22.293 "keep_alive_timeout_ms": 10000, 00:36:22.293 "low_priority_weight": 0, 00:36:22.293 "medium_priority_weight": 0, 00:36:22.293 "nvme_adminq_poll_period_us": 10000, 00:36:22.293 "nvme_error_stat": false, 00:36:22.293 "nvme_ioq_poll_period_us": 0, 00:36:22.293 "rdma_cm_event_timeout_ms": 0, 00:36:22.293 "rdma_max_cq_size": 0, 00:36:22.293 "rdma_srq_size": 0, 00:36:22.293 "reconnect_delay_sec": 0, 00:36:22.293 "timeout_admin_us": 0, 00:36:22.293 "timeout_us": 0, 00:36:22.293 "transport_ack_timeout": 0, 00:36:22.293 "transport_retry_count": 4, 00:36:22.293 "transport_tos": 0 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "bdev_nvme_attach_controller", 00:36:22.293 "params": { 00:36:22.293 "adrfam": "IPv4", 00:36:22.293 "ctrlr_loss_timeout_sec": 0, 00:36:22.293 "ddgst": false, 00:36:22.293 "fast_io_fail_timeout_sec": 0, 00:36:22.293 "hdgst": false, 00:36:22.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:22.293 "multipath": "multipath", 00:36:22.293 "name": "nvme0", 00:36:22.293 "prchk_guard": false, 00:36:22.293 "prchk_reftag": false, 00:36:22.293 "psk": "key0", 00:36:22.293 "reconnect_delay_sec": 0, 00:36:22.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:22.293 "traddr": "127.0.0.1", 00:36:22.293 "trsvcid": "4420", 00:36:22.293 "trtype": "TCP" 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "bdev_nvme_set_hotplug", 00:36:22.293 "params": { 00:36:22.293 "enable": false, 00:36:22.293 "period_us": 100000 00:36:22.293 } 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "method": "bdev_wait_for_examine" 00:36:22.293 } 00:36:22.293 ] 00:36:22.293 }, 00:36:22.293 { 00:36:22.293 "subsystem": "nbd", 00:36:22.293 "config": [] 00:36:22.293 } 00:36:22.293 ] 00:36:22.293 }' 00:36:22.293 06:38:41 keyring_file -- keyring/file.sh@115 -- # killprocess 129852 00:36:22.293 06:38:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 129852 ']' 00:36:22.293 06:38:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 129852 00:36:22.293 06:38:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:22.293 06:38:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:22.293 06:38:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129852 00:36:22.293 killing process with pid 129852 00:36:22.293 Received shutdown signal, test time was about 1.000000 seconds 00:36:22.293 00:36:22.294 Latency(us) 00:36:22.294 [2024-11-27T06:38:41.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.294 [2024-11-27T06:38:41.824Z] 
=================================================================================================================== 00:36:22.294 [2024-11-27T06:38:41.824Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129852' 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@973 -- # kill 129852 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@978 -- # wait 129852 00:36:22.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:22.294 06:38:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=130308 00:36:22.294 06:38:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 130308 /var/tmp/bperf.sock 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 130308 ']' 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:22.294 06:38:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:22.294 06:38:41 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:22.552 06:38:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:22.552 "subsystems": [ 00:36:22.552 { 00:36:22.552 "subsystem": "keyring", 00:36:22.552 "config": [ 00:36:22.552 { 00:36:22.552 "method": "keyring_file_add_key", 00:36:22.552 "params": { 00:36:22.552 "name": "key0", 00:36:22.552 "path": "/tmp/tmp.06ZGUl2OaP" 00:36:22.552 } 00:36:22.552 }, 00:36:22.552 { 00:36:22.552 "method": "keyring_file_add_key", 00:36:22.552 "params": { 00:36:22.552 "name": "key1", 00:36:22.552 "path": "/tmp/tmp.hRBgOd0RNz" 00:36:22.552 } 00:36:22.552 } 00:36:22.552 ] 00:36:22.552 }, 00:36:22.552 { 00:36:22.552 "subsystem": "iobuf", 00:36:22.552 "config": [ 00:36:22.552 { 00:36:22.552 "method": "iobuf_set_options", 00:36:22.552 "params": { 00:36:22.552 "enable_numa": false, 00:36:22.552 "large_bufsize": 135168, 00:36:22.552 "large_pool_count": 1024, 00:36:22.552 "small_bufsize": 8192, 00:36:22.552 "small_pool_count": 8192 00:36:22.552 } 00:36:22.552 } 00:36:22.552 ] 00:36:22.552 }, 00:36:22.552 { 00:36:22.552 "subsystem": "sock", 00:36:22.552 "config": [ 00:36:22.552 { 00:36:22.552 "method": "sock_set_default_impl", 00:36:22.552 "params": { 00:36:22.552 "impl_name": "posix" 00:36:22.552 } 00:36:22.552 }, 00:36:22.552 { 00:36:22.552 "method": "sock_impl_set_options", 00:36:22.552 "params": { 00:36:22.552 "enable_ktls": false, 00:36:22.552 "enable_placement_id": 0, 00:36:22.552 "enable_quickack": false, 00:36:22.552 "enable_recv_pipe": true, 00:36:22.552 "enable_zerocopy_send_client": false, 00:36:22.552 "enable_zerocopy_send_server": true, 00:36:22.552 "impl_name": "ssl", 00:36:22.552 "recv_buf_size": 4096, 00:36:22.552 "send_buf_size": 4096, 00:36:22.552 "tls_version": 0, 00:36:22.552 "zerocopy_threshold": 0 00:36:22.552 } 00:36:22.552 }, 
00:36:22.552 { 00:36:22.552 "method": "sock_impl_set_options", 00:36:22.552 "params": { 00:36:22.552 "enable_ktls": false, 00:36:22.552 "enable_placement_id": 0, 00:36:22.552 "enable_quickack": false, 00:36:22.552 "enable_recv_pipe": true, 00:36:22.552 "enable_zerocopy_send_client": false, 00:36:22.552 "enable_zerocopy_send_server": true, 00:36:22.553 "impl_name": "posix", 00:36:22.553 "recv_buf_size": 2097152, 00:36:22.553 "send_buf_size": 2097152, 00:36:22.553 "tls_version": 0, 00:36:22.553 "zerocopy_threshold": 0 00:36:22.553 } 00:36:22.553 } 00:36:22.553 ] 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "subsystem": "vmd", 00:36:22.553 "config": [] 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "subsystem": "accel", 00:36:22.553 "config": [ 00:36:22.553 { 00:36:22.553 "method": "accel_set_options", 00:36:22.553 "params": { 00:36:22.553 "buf_count": 2048, 00:36:22.553 "large_cache_size": 16, 00:36:22.553 "sequence_count": 2048, 00:36:22.553 "small_cache_size": 128, 00:36:22.553 "task_count": 2048 00:36:22.553 } 00:36:22.553 } 00:36:22.553 ] 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "subsystem": "bdev", 00:36:22.553 "config": [ 00:36:22.553 { 00:36:22.553 "method": "bdev_set_options", 00:36:22.553 "params": { 00:36:22.553 "bdev_auto_examine": true, 00:36:22.553 "bdev_io_cache_size": 256, 00:36:22.553 "bdev_io_pool_size": 65535, 00:36:22.553 "iobuf_large_cache_size": 16, 00:36:22.553 "iobuf_small_cache_size": 128 00:36:22.553 } 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "method": "bdev_raid_set_options", 00:36:22.553 "params": { 00:36:22.553 "process_max_bandwidth_mb_sec": 0, 00:36:22.553 "process_window_size_kb": 1024 00:36:22.553 } 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "method": "bdev_iscsi_set_options", 00:36:22.553 "params": { 00:36:22.553 "timeout_sec": 30 00:36:22.553 } 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "method": "bdev_nvme_set_options", 00:36:22.553 "params": { 00:36:22.553 "action_on_timeout": "none", 00:36:22.553 "allow_accel_sequence": false, 00:36:22.553 "arbitration_burst": 0, 00:36:22.553 "bdev_retry_count": 3, 00:36:22.553 "ctrlr_loss_timeout_sec": 0, 00:36:22.553 "delay_cmd_submit": true, 00:36:22.553 "dhchap_dhgroups": [ 00:36:22.553 "null", 00:36:22.553 "ffdhe2048", 00:36:22.553 "ffdhe3072", 00:36:22.553 "ffdhe4096", 00:36:22.553 "ffdhe6144", 00:36:22.553 "ffdhe8192" 00:36:22.553 ], 00:36:22.553 "dhchap_digests": [ 00:36:22.553 "sha256", 00:36:22.553 "sha384", 00:36:22.553 "sha512" 00:36:22.553 ], 00:36:22.553 "disable_auto_failback": false, 00:36:22.553 "fast_io_fail_timeout_sec": 0, 00:36:22.553 "generate_uuids": false, 00:36:22.553 "high_priority_weight": 0, 00:36:22.553 "io_path_stat": false, 00:36:22.553 "io_queue_requests": 512, 00:36:22.553 "keep_alive_timeout_ms": 10000, 00:36:22.553 "low_priority_weight": 0, 00:36:22.553 "medium_priority_weight": 0, 00:36:22.553 "nvme_adminq_poll_period_us": 10000, 00:36:22.553 "nvme_error_stat": false, 00:36:22.553 "nvme_ioq_poll_period_us": 0, 00:36:22.553 "rdma_cm_event_timeout_ms": 0, 00:36:22.553 "rdma_max_cq_size": 0, 00:36:22.553 "rdma_srq_size": 0, 00:36:22.553 "reconnect_delay_sec": 0, 00:36:22.553 "timeout_admin_us": 0, 00:36:22.553 "timeout_us": 0, 00:36:22.553 "transport_ack_timeout": 0, 00:36:22.553 "transport_retry_count": 4, 00:36:22.553 "transport_tos": 0 00:36:22.553 } 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "method": "bdev_nvme_attach_controller", 00:36:22.553 "params": { 00:36:22.553 "adrfam": "IPv4", 00:36:22.553 "ctrlr_loss_timeout_sec": 0, 00:36:22.553 "ddgst": false, 00:36:22.553 
"fast_io_fail_timeout_sec": 0, 00:36:22.553 "hdgst": false, 00:36:22.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:22.553 "multipath": "multipath", 00:36:22.553 "name": "nvme0", 00:36:22.553 "prchk_guard": false, 00:36:22.553 "prchk_reftag": false, 00:36:22.553 "psk": "key0", 00:36:22.553 "reconnect_delay_sec": 0, 00:36:22.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:22.553 "traddr": "127.0.0.1", 00:36:22.553 "trsvcid": "4420", 00:36:22.553 "trtype": "TCP" 00:36:22.553 } 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "method": "bdev_nvme_set_hotplug", 00:36:22.553 "params": { 00:36:22.553 "enable": false, 00:36:22.553 "period_us": 100000 00:36:22.553 } 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "method": "bdev_wait_for_examine" 00:36:22.553 } 00:36:22.553 ] 00:36:22.553 }, 00:36:22.553 { 00:36:22.553 "subsystem": "nbd", 00:36:22.553 "config": [] 00:36:22.553 } 00:36:22.553 ] 00:36:22.553 }' 00:36:22.553 [2024-11-27 06:38:41.856656] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:36:22.553 [2024-11-27 06:38:41.856721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130308 ] 00:36:22.553 [2024-11-27 06:38:41.970431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:22.553 [2024-11-27 06:38:41.994740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.553 [2024-11-27 06:38:42.010886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.811 [2024-11-27 06:38:42.146813] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:23.377 06:38:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.377 06:38:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:23.377 06:38:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:23.377 06:38:42 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:23.377 06:38:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.652 06:38:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:23.652 06:38:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:23.652 06:38:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:23.652 06:38:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.652 06:38:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.652 06:38:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.652 06:38:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.652 06:38:43 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:23.652 06:38:43 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:23.652 06:38:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:23.652 06:38:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.652 06:38:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.652 06:38:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:23.652 06:38:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:23.910 06:38:43 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:23.910 06:38:43 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:23.910 06:38:43 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:23.910 06:38:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:24.168 06:38:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:24.168 06:38:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:24.168 06:38:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.06ZGUl2OaP /tmp/tmp.hRBgOd0RNz 00:36:24.168 06:38:43 keyring_file -- keyring/file.sh@20 -- # killprocess 130308 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 130308 ']' 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 130308 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 130308 00:36:24.168 killing process with pid 130308 00:36:24.168 Received shutdown signal, test time was about 1.000000 seconds 00:36:24.168 00:36:24.168 Latency(us) 00:36:24.168 [2024-11-27T06:38:43.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.168 [2024-11-27T06:38:43.698Z] =================================================================================================================== 00:36:24.168 [2024-11-27T06:38:43.698Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 130308' 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@973 -- # kill 130308 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@978 -- # wait 130308 00:36:24.168 06:38:43 keyring_file -- keyring/file.sh@21 -- # killprocess 129818 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 129818 ']' 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 129818 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.168 06:38:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129818 00:36:24.426 killing process with pid 129818 00:36:24.426 06:38:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:24.426 06:38:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:24.426 06:38:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129818' 00:36:24.426 06:38:43 keyring_file -- common/autotest_common.sh@973 -- # kill 129818 00:36:24.426 06:38:43 keyring_file -- common/autotest_common.sh@978 -- # wait 129818 00:36:24.687 00:36:24.687 real 0m13.368s 00:36:24.687 user 0m32.300s 00:36:24.687 sys 0m2.676s 00:36:24.687 06:38:43 keyring_file -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:36:24.687 ************************************ 00:36:24.687 END TEST keyring_file 00:36:24.687 06:38:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:24.687 ************************************ 00:36:24.687 06:38:44 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:24.687 06:38:44 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:36:24.687 06:38:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:24.687 06:38:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.687 06:38:44 -- common/autotest_common.sh@10 -- # set +x 00:36:24.687 ************************************ 00:36:24.687 START TEST keyring_linux 00:36:24.687 ************************************ 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:36:24.687 Joined session keyring: 25417215 00:36:24.687 * Looking for test storage... 00:36:24.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.687 06:38:44 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.687 --rc genhtml_branch_coverage=1 00:36:24.687 --rc genhtml_function_coverage=1 00:36:24.687 --rc genhtml_legend=1 00:36:24.687 --rc geninfo_all_blocks=1 00:36:24.687 --rc geninfo_unexecuted_blocks=1 00:36:24.687 00:36:24.687 ' 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.687 --rc genhtml_branch_coverage=1 00:36:24.687 --rc genhtml_function_coverage=1 00:36:24.687 --rc genhtml_legend=1 00:36:24.687 --rc geninfo_all_blocks=1 00:36:24.687 --rc geninfo_unexecuted_blocks=1 00:36:24.687 00:36:24.687 ' 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.687 --rc genhtml_branch_coverage=1 00:36:24.687 --rc genhtml_function_coverage=1 00:36:24.687 --rc genhtml_legend=1 00:36:24.687 --rc geninfo_all_blocks=1 00:36:24.687 --rc geninfo_unexecuted_blocks=1 00:36:24.687 00:36:24.687 ' 00:36:24.687 06:38:44 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.687 --rc genhtml_branch_coverage=1 00:36:24.687 --rc genhtml_function_coverage=1 00:36:24.687 --rc genhtml_legend=1 00:36:24.687 --rc geninfo_all_blocks=1 00:36:24.687 --rc geninfo_unexecuted_blocks=1 00:36:24.687 00:36:24.687 ' 00:36:24.687 06:38:44 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:24.687 06:38:44 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.687 06:38:44 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=c9bd98e0-69a6-4a21-8ade-4c001aca4c30 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.687 06:38:44 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:24.688 06:38:44 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.688 06:38:44 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.688 06:38:44 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.688 06:38:44 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.688 06:38:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.688 06:38:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.688 06:38:44 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.688 06:38:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:24.688 06:38:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@51 -- # : 0 
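
One quirk worth flagging before the next stretch of trace: nvmf/common.sh line 33 applies a numeric test to an empty value, so bash prints the "[: : integer expression expected" complaint visible just below, and the script carries on regardless. A minimal reproduction of that bash behavior (variable name hypothetical):

    val=''
    [ "$val" -eq 1 ]                     # bash: [: : integer expression expected
    [ -n "$val" ] && [ "$val" -eq 1 ]    # guarded form stays quiet when empty
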
00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.688 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.688 06:38:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:24.688 06:38:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:24.688 06:38:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:24.688 06:38:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:24.688 06:38:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:24.688 06:38:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:24.688 06:38:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:24.688 06:38:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:24.688 06:38:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:24.688 06:38:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:24.688 06:38:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:24.688 06:38:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:24.688 06:38:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:24.688 06:38:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:24.948 /tmp/:spdk-test:key0 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:24.948 06:38:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:36:24.948 06:38:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:24.948 06:38:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:24.948 06:38:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:24.948 06:38:44 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:24.948 06:38:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:24.948 06:38:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:24.948 /tmp/:spdk-test:key1 00:36:24.948 06:38:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:24.948 06:38:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=130460 00:36:24.948 06:38:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 130460 00:36:24.948 06:38:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 130460 ']' 00:36:24.948 06:38:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.948 06:38:44 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:24.948 06:38:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.948 06:38:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.948 06:38:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.948 06:38:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:24.948 [2024-11-27 06:38:44.310855] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:36:24.948 [2024-11-27 06:38:44.310933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130460 ] 00:36:24.948 [2024-11-27 06:38:44.425410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
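Note: prep_key (keyring/common.sh) writes each raw hex key to /tmp/:spdk-test:keyN with mode 0600 after format_interchange_psk has wrapped it in the NVMe TLS PSK interchange format via the inline "python -" step traced above. A minimal sketch of what that step plausibly computes, assuming the interchange blob is base64(configured PSK || CRC32, little-endian) as in the NVMe/TCP PSK interchange format; the function name here is illustrative, not SPDK's:

    import base64
    import zlib

    def format_interchange_psk(key: str, hmac_id: int = 0) -> str:
        # configured PSK bytes (here the ASCII hex string itself) plus its CRC32
        blob = key.encode() + zlib.crc32(key.encode()).to_bytes(4, "little")
        return f"NVMeTLSkey-1:{hmac_id:02d}:{base64.b64encode(blob).decode()}:"

    # reproduces the NVMeTLSkey-1:00:... string stored for key0 below
    print(format_interchange_psk("00112233445566778899aabbccddeeff"))

Identifier 00 means the configured PSK is retained as-is (no HMAC transform); 01 and 02 would select SHA-256 and SHA-384 derivation.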
00:36:24.948 [2024-11-27 06:38:44.450208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.240 [2024-11-27 06:38:44.478234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.859 06:38:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.859 06:38:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:25.859 06:38:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:25.859 06:38:45 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.859 06:38:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:25.859 [2024-11-27 06:38:45.185487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.859 null0 00:36:25.859 [2024-11-27 06:38:45.217459] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:25.859 [2024-11-27 06:38:45.217615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:25.859 06:38:45 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.859 06:38:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:25.859 68164499 00:36:25.859 06:38:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:25.859 728632246 00:36:25.859 06:38:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=130495 00:36:25.859 06:38:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 130495 /var/tmp/bperf.sock 00:36:25.859 06:38:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 130495 ']' 00:36:25.859 06:38:45 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:25.860 06:38:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:25.860 06:38:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:25.860 06:38:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:25.860 06:38:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.860 06:38:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:25.860 [2024-11-27 06:38:45.280678] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.11.0-rc4 initialization... 00:36:25.860 [2024-11-27 06:38:45.280738] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130495 ] 00:36:26.118 [2024-11-27 06:38:45.393821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc4 is used. There is no support for it in SPDK. Enabled only for validation. 
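Note: the two numbers printed after each "keyctl add user ... @s" call above (68164499 and 728632246) are kernel key serials. SPDK's keyring_linux module resolves a key by its description in the session keyring, which is why bdevperf can later reference the key simply as :spdk-test:key0. The same round trip by hand, using only keyutils commands that also appear in this log:

    # store the formatted PSK in the session keyring; keyctl prints the serial
    sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
    keyctl print "$sn"                        # dump the payload back
    keyctl search @s user :spdk-test:key0     # resolve the serial by description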
00:36:26.118 [2024-11-27 06:38:45.419073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.118 [2024-11-27 06:38:45.435344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.684 06:38:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.684 06:38:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:26.684 06:38:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:26.684 06:38:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:26.942 06:38:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:26.942 06:38:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:27.200 06:38:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:27.200 06:38:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:27.457 [2024-11-27 06:38:46.794335] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:27.457 nvme0n1 00:36:27.457 06:38:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:27.457 06:38:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:27.457 06:38:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:27.457 06:38:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:27.457 06:38:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:27.457 06:38:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.715 06:38:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:27.715 06:38:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:27.715 06:38:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:27.715 06:38:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:27.715 06:38:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.715 06:38:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:27.715 06:38:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.973 06:38:47 keyring_linux -- keyring/linux.sh@25 -- # sn=68164499 00:36:27.973 06:38:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:27.973 06:38:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:27.973 06:38:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 68164499 == \6\8\1\6\4\4\9\9 ]] 00:36:27.973 06:38:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 68164499 00:36:27.973 06:38:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: 
]] 00:36:27.973 06:38:47 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:27.973 Running I/O for 1 seconds... 00:36:28.906 21222.00 IOPS, 82.90 MiB/s 00:36:28.906 Latency(us) 00:36:28.906 [2024-11-27T06:38:48.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.906 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:28.906 nvme0n1 : 1.01 21224.10 82.91 0.00 0.00 6011.07 3806.13 8922.98 00:36:28.906 [2024-11-27T06:38:48.436Z] =================================================================================================================== 00:36:28.906 [2024-11-27T06:38:48.436Z] Total : 21224.10 82.91 0.00 0.00 6011.07 3806.13 8922.98 00:36:28.906 { 00:36:28.906 "results": [ 00:36:28.906 { 00:36:28.906 "job": "nvme0n1", 00:36:28.906 "core_mask": "0x2", 00:36:28.906 "workload": "randread", 00:36:28.906 "status": "finished", 00:36:28.906 "queue_depth": 128, 00:36:28.906 "io_size": 4096, 00:36:28.906 "runtime": 1.005932, 00:36:28.906 "iops": 21224.098646827024, 00:36:28.906 "mibps": 82.90663533916806, 00:36:28.906 "io_failed": 0, 00:36:28.906 "io_timeout": 0, 00:36:28.906 "avg_latency_us": 6011.069119942353, 00:36:28.906 "min_latency_us": 3806.1292307692306, 00:36:28.906 "max_latency_us": 8922.978461538461 00:36:28.906 } 00:36:28.906 ], 00:36:28.906 "core_count": 1 00:36:28.906 } 00:36:28.906 06:38:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:28.906 06:38:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:29.164 06:38:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:29.164 06:38:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:29.164 06:38:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:29.164 06:38:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:29.164 06:38:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:29.164 06:38:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.422 06:38:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:29.423 06:38:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:29.423 06:38:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:29.423 06:38:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:29.423 06:38:48 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:29.423 06:38:48 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:29.423 06:38:48 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:29.423 06:38:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:29.423 06:38:48 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:29.423 06:38:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:29.423 06:38:48 keyring_linux -- common/autotest_common.sh@655 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:29.423 06:38:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:29.681 [2024-11-27 06:38:49.028927] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:29.681 [2024-11-27 06:38:49.029926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14623e0 (107): Transport endpoint is not connected 00:36:29.681 [2024-11-27 06:38:49.030918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14623e0 (9): Bad file descriptor 00:36:29.681 [2024-11-27 06:38:49.031918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:29.681 [2024-11-27 06:38:49.031932] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:29.681 [2024-11-27 06:38:49.031937] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:29.681 [2024-11-27 06:38:49.031943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:29.681 2024/11/27 06:38:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:29.681 request: 00:36:29.681 { 00:36:29.681 "method": "bdev_nvme_attach_controller", 00:36:29.681 "params": { 00:36:29.681 "name": "nvme0", 00:36:29.681 "trtype": "tcp", 00:36:29.681 "traddr": "127.0.0.1", 00:36:29.681 "adrfam": "ipv4", 00:36:29.681 "trsvcid": "4420", 00:36:29.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.681 "prchk_reftag": false, 00:36:29.681 "prchk_guard": false, 00:36:29.681 "hdgst": false, 00:36:29.681 "ddgst": false, 00:36:29.681 "psk": ":spdk-test:key1", 00:36:29.681 "allow_unrecognized_csi": false 00:36:29.681 } 00:36:29.681 } 00:36:29.681 Got JSON-RPC error response 00:36:29.681 GoRPCClient: error on JSON-RPC call 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:29.681 06:38:49 keyring_linux -- 
keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@33 -- # sn=68164499 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 68164499 00:36:29.681 1 links removed 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@33 -- # sn=728632246 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 728632246 00:36:29.681 1 links removed 00:36:29.681 06:38:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 130495 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 130495 ']' 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 130495 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 130495 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:29.681 06:38:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:29.681 killing process with pid 130495 00:36:29.681 Received shutdown signal, test time was about 1.000000 seconds 00:36:29.682 00:36:29.682 Latency(us) 00:36:29.682 [2024-11-27T06:38:49.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.682 [2024-11-27T06:38:49.212Z] =================================================================================================================== 00:36:29.682 [2024-11-27T06:38:49.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:29.682 06:38:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 130495' 00:36:29.682 06:38:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 130495 00:36:29.682 06:38:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 130495 00:36:29.939 06:38:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 130460 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 130460 ']' 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 130460 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 130460 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:29.939 killing process with pid 130460 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 130460' 00:36:29.939 06:38:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 130460 00:36:29.939 06:38:49 keyring_linux -- 
common/autotest_common.sh@978 -- # wait 130460 00:36:30.198 00:36:30.198 real 0m5.473s 00:36:30.198 user 0m10.199s 00:36:30.198 sys 0m1.426s 00:36:30.199 06:38:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:30.199 06:38:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.199 ************************************ 00:36:30.199 END TEST keyring_linux 00:36:30.199 ************************************ 00:36:30.199 06:38:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:30.199 06:38:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:30.199 06:38:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:30.199 06:38:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:30.199 06:38:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:30.199 06:38:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:30.199 06:38:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:30.199 06:38:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:30.199 06:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:30.199 06:38:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:30.199 06:38:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:30.199 06:38:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:30.199 06:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:32.109 INFO: APP EXITING 00:36:32.109 INFO: killing all VMs 00:36:32.109 INFO: killing vhost app 00:36:32.109 INFO: EXIT DONE 00:36:32.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:32.370 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:32.370 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:33.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:33.313 Cleaning 00:36:33.313 Removing: /var/run/dpdk/spdk0/config 00:36:33.313 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:33.313 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:33.313 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:33.313 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:33.313 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:33.313 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:33.313 Removing: /var/run/dpdk/spdk1/config 00:36:33.313 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:33.313 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:33.313 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:33.313 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:33.313 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:33.313 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:33.313 Removing: /var/run/dpdk/spdk2/config 00:36:33.313 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:33.313 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:33.313 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:33.313 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:33.313 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:33.313 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:33.313 Removing: /var/run/dpdk/spdk3/config 00:36:33.313 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:33.313 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:33.313 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:33.313 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:33.313 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:33.313 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:33.313 Removing: /var/run/dpdk/spdk4/config 00:36:33.313 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:33.313 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:33.313 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:33.313 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:33.313 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:33.313 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:33.313 Removing: /dev/shm/nvmf_trace.0 00:36:33.313 Removing: /dev/shm/spdk_tgt_trace.pid71709 00:36:33.313 Removing: /var/run/dpdk/spdk0 00:36:33.313 Removing: /var/run/dpdk/spdk1 00:36:33.313 Removing: /var/run/dpdk/spdk2 00:36:33.313 Removing: /var/run/dpdk/spdk3 00:36:33.313 Removing: /var/run/dpdk/spdk4 00:36:33.313 Removing: /var/run/dpdk/spdk_pid100162 00:36:33.313 Removing: /var/run/dpdk/spdk_pid100263 00:36:33.313 Removing: /var/run/dpdk/spdk_pid100377 00:36:33.313 Removing: /var/run/dpdk/spdk_pid100764 00:36:33.313 Removing: /var/run/dpdk/spdk_pid101201 00:36:33.313 Removing: /var/run/dpdk/spdk_pid101202 00:36:33.313 Removing: /var/run/dpdk/spdk_pid101203 00:36:33.313 Removing: /var/run/dpdk/spdk_pid101483 00:36:33.313 Removing: /var/run/dpdk/spdk_pid101755 00:36:33.313 Removing: /var/run/dpdk/spdk_pid101758 00:36:33.313 Removing: /var/run/dpdk/spdk_pid104108 00:36:33.313 Removing: /var/run/dpdk/spdk_pid104521 00:36:33.313 Removing: /var/run/dpdk/spdk_pid104880 00:36:33.313 Removing: /var/run/dpdk/spdk_pid105483 00:36:33.313 Removing: /var/run/dpdk/spdk_pid105485 00:36:33.313 Removing: /var/run/dpdk/spdk_pid105870 00:36:33.313 Removing: /var/run/dpdk/spdk_pid105890 00:36:33.313 Removing: /var/run/dpdk/spdk_pid105904 00:36:33.313 Removing: /var/run/dpdk/spdk_pid105935 00:36:33.313 Removing: /var/run/dpdk/spdk_pid105944 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106085 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106087 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106195 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106197 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106310 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106322 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106836 00:36:33.313 Removing: /var/run/dpdk/spdk_pid106879 00:36:33.313 Removing: /var/run/dpdk/spdk_pid107030 00:36:33.313 Removing: /var/run/dpdk/spdk_pid107147 00:36:33.313 Removing: /var/run/dpdk/spdk_pid107592 00:36:33.313 Removing: /var/run/dpdk/spdk_pid107830 00:36:33.313 Removing: /var/run/dpdk/spdk_pid108347 00:36:33.313 Removing: /var/run/dpdk/spdk_pid109034 00:36:33.313 Removing: /var/run/dpdk/spdk_pid110401 00:36:33.313 Removing: /var/run/dpdk/spdk_pid111054 00:36:33.313 Removing: /var/run/dpdk/spdk_pid111060 00:36:33.313 Removing: /var/run/dpdk/spdk_pid113089 00:36:33.314 
Removing: /var/run/dpdk/spdk_pid113179 00:36:33.314 Removing: /var/run/dpdk/spdk_pid113263 00:36:33.314 Removing: /var/run/dpdk/spdk_pid113336 00:36:33.314 Removing: /var/run/dpdk/spdk_pid113488 00:36:33.314 Removing: /var/run/dpdk/spdk_pid113577 00:36:33.314 Removing: /var/run/dpdk/spdk_pid113663 00:36:33.314 Removing: /var/run/dpdk/spdk_pid113742 00:36:33.314 Removing: /var/run/dpdk/spdk_pid114126 00:36:33.314 Removing: /var/run/dpdk/spdk_pid114883 00:36:33.314 Removing: /var/run/dpdk/spdk_pid116289 00:36:33.314 Removing: /var/run/dpdk/spdk_pid116477 00:36:33.314 Removing: /var/run/dpdk/spdk_pid116770 00:36:33.314 Removing: /var/run/dpdk/spdk_pid117316 00:36:33.314 Removing: /var/run/dpdk/spdk_pid117699 00:36:33.314 Removing: /var/run/dpdk/spdk_pid120174 00:36:33.314 Removing: /var/run/dpdk/spdk_pid120220 00:36:33.314 Removing: /var/run/dpdk/spdk_pid120581 00:36:33.314 Removing: /var/run/dpdk/spdk_pid120626 00:36:33.314 Removing: /var/run/dpdk/spdk_pid121027 00:36:33.314 Removing: /var/run/dpdk/spdk_pid121617 00:36:33.314 Removing: /var/run/dpdk/spdk_pid122039 00:36:33.314 Removing: /var/run/dpdk/spdk_pid123074 00:36:33.314 Removing: /var/run/dpdk/spdk_pid124115 00:36:33.314 Removing: /var/run/dpdk/spdk_pid124232 00:36:33.314 Removing: /var/run/dpdk/spdk_pid124291 00:36:33.314 Removing: /var/run/dpdk/spdk_pid125898 00:36:33.314 Removing: /var/run/dpdk/spdk_pid126221 00:36:33.314 Removing: /var/run/dpdk/spdk_pid126556 00:36:33.314 Removing: /var/run/dpdk/spdk_pid127130 00:36:33.576 Removing: /var/run/dpdk/spdk_pid127135 00:36:33.576 Removing: /var/run/dpdk/spdk_pid127548 00:36:33.576 Removing: /var/run/dpdk/spdk_pid127702 00:36:33.576 Removing: /var/run/dpdk/spdk_pid127859 00:36:33.576 Removing: /var/run/dpdk/spdk_pid127972 00:36:33.576 Removing: /var/run/dpdk/spdk_pid128134 00:36:33.576 Removing: /var/run/dpdk/spdk_pid128243 00:36:33.576 Removing: /var/run/dpdk/spdk_pid128964 00:36:33.576 Removing: /var/run/dpdk/spdk_pid128995 00:36:33.576 Removing: /var/run/dpdk/spdk_pid129030 00:36:33.576 Removing: /var/run/dpdk/spdk_pid129284 00:36:33.576 Removing: /var/run/dpdk/spdk_pid129319 00:36:33.576 Removing: /var/run/dpdk/spdk_pid129349 00:36:33.576 Removing: /var/run/dpdk/spdk_pid129818 00:36:33.576 Removing: /var/run/dpdk/spdk_pid129852 00:36:33.576 Removing: /var/run/dpdk/spdk_pid130308 00:36:33.576 Removing: /var/run/dpdk/spdk_pid130460 00:36:33.576 Removing: /var/run/dpdk/spdk_pid130495 00:36:33.576 Removing: /var/run/dpdk/spdk_pid71561 00:36:33.576 Removing: /var/run/dpdk/spdk_pid71709 00:36:33.576 Removing: /var/run/dpdk/spdk_pid71959 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72051 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72072 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72176 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72192 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72326 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72595 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72773 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72858 00:36:33.576 Removing: /var/run/dpdk/spdk_pid72952 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73036 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73069 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73105 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73169 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73270 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73868 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73926 00:36:33.576 Removing: /var/run/dpdk/spdk_pid73990 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74018 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74086 
00:36:33.576 Removing: /var/run/dpdk/spdk_pid74095 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74163 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74191 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74237 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74267 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74313 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74345 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74494 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74524 00:36:33.576 Removing: /var/run/dpdk/spdk_pid74601 00:36:33.576 Removing: /var/run/dpdk/spdk_pid75086 00:36:33.576 Removing: /var/run/dpdk/spdk_pid75479 00:36:33.576 Removing: /var/run/dpdk/spdk_pid77973 00:36:33.576 Removing: /var/run/dpdk/spdk_pid78024 00:36:33.576 Removing: /var/run/dpdk/spdk_pid78354 00:36:33.576 Removing: /var/run/dpdk/spdk_pid78403 00:36:33.576 Removing: /var/run/dpdk/spdk_pid78809 00:36:33.576 Removing: /var/run/dpdk/spdk_pid79383 00:36:33.576 Removing: /var/run/dpdk/spdk_pid79811 00:36:33.576 Removing: /var/run/dpdk/spdk_pid80857 00:36:33.576 Removing: /var/run/dpdk/spdk_pid81926 00:36:33.576 Removing: /var/run/dpdk/spdk_pid82043 00:36:33.576 Removing: /var/run/dpdk/spdk_pid82112 00:36:33.576 Removing: /var/run/dpdk/spdk_pid83715 00:36:33.576 Removing: /var/run/dpdk/spdk_pid84057 00:36:33.576 Removing: /var/run/dpdk/spdk_pid91288 00:36:33.576 Removing: /var/run/dpdk/spdk_pid91716 00:36:33.576 Removing: /var/run/dpdk/spdk_pid92320 00:36:33.576 Removing: /var/run/dpdk/spdk_pid92855 00:36:33.576 Removing: /var/run/dpdk/spdk_pid98295 00:36:33.576 Removing: /var/run/dpdk/spdk_pid98774 00:36:33.576 Removing: /var/run/dpdk/spdk_pid98883 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99029 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99080 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99134 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99181 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99352 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99506 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99797 00:36:33.576 Removing: /var/run/dpdk/spdk_pid99901 00:36:33.576 Clean 00:36:33.837 06:38:53 -- common/autotest_common.sh@1453 -- # return 0 00:36:33.837 06:38:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:33.837 06:38:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:33.837 06:38:53 -- common/autotest_common.sh@10 -- # set +x 00:36:33.837 06:38:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:33.837 06:38:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:33.837 06:38:53 -- common/autotest_common.sh@10 -- # set +x 00:36:33.837 06:38:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:33.837 06:38:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:33.837 06:38:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:33.837 06:38:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:33.837 06:38:53 -- spdk/autotest.sh@398 -- # hostname 00:36:33.837 06:38:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:34.098 geninfo: WARNING: invalid characters removed from testname! 
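Note: the lcov run above captures the post-run coverage snapshot (cov_test.info); the steps that follow merge it with the pre-run baseline and then filter out DPDK, system, and example paths so the final report covers SPDK sources only. Condensed, with the long --rc lcov/genhtml flags from the log elided for brevity, the sequence amounts to:

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info --ignore-errors unused,unused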
00:37:00.755 06:39:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:01.326 06:39:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:03.240 06:39:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:05.155 06:39:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:07.699 06:39:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:09.614 06:39:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:12.160 06:39:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:12.160 06:39:31 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:12.160 06:39:31 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:12.160 06:39:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:12.160 06:39:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:12.160 06:39:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:12.160 + [[ -n 5748 ]] 00:37:12.160 + sudo kill 5748 00:37:12.171 [Pipeline] } 00:37:12.190 [Pipeline] // timeout 00:37:12.196 [Pipeline] } 00:37:12.213 [Pipeline] // stage 00:37:12.219 [Pipeline] } 00:37:12.233 [Pipeline] // catchError 00:37:12.242 [Pipeline] stage 00:37:12.244 [Pipeline] { (Stop VM) 00:37:12.254 [Pipeline] sh 00:37:12.535 + vagrant halt 00:37:15.155 ==> default: Halting domain... 
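Note: before the Stop VM stage tears the worker down, autotest kills a recorded background process only if a pid was actually captured (the "[[ -n 5748 ]]" guard followed by "sudo kill 5748" above). A sketch of that guard pattern; the variable name is hypothetical:

    # kill a recorded background pid only if one was captured
    [[ -n $monitor_pid ]] && sudo kill "$monitor_pid"

The vagrant halt/destroy and the move of the output directory into the Jenkins workspace then follow below.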
00:37:20.460 [Pipeline] sh 00:37:20.744 + vagrant destroy -f 00:37:23.288 ==> default: Removing domain... 00:37:23.875 [Pipeline] sh 00:37:24.162 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:37:24.173 [Pipeline] } 00:37:24.189 [Pipeline] // stage 00:37:24.195 [Pipeline] } 00:37:24.210 [Pipeline] // dir 00:37:24.215 [Pipeline] } 00:37:24.230 [Pipeline] // wrap 00:37:24.236 [Pipeline] } 00:37:24.250 [Pipeline] // catchError 00:37:24.260 [Pipeline] stage 00:37:24.262 [Pipeline] { (Epilogue) 00:37:24.275 [Pipeline] sh 00:37:24.564 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:29.847 [Pipeline] catchError 00:37:29.848 [Pipeline] { 00:37:29.858 [Pipeline] sh 00:37:30.139 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:30.139 Artifacts sizes are good 00:37:30.150 [Pipeline] } 00:37:30.165 [Pipeline] // catchError 00:37:30.176 [Pipeline] archiveArtifacts 00:37:30.183 Archiving artifacts 00:37:30.282 [Pipeline] cleanWs 00:37:30.291 [WS-CLEANUP] Deleting project workspace... 00:37:30.291 [WS-CLEANUP] Deferred wipeout is used... 00:37:30.299 [WS-CLEANUP] done 00:37:30.301 [Pipeline] } 00:37:30.317 [Pipeline] // stage 00:37:30.322 [Pipeline] } 00:37:30.335 [Pipeline] // node 00:37:30.341 [Pipeline] End of Pipeline 00:37:30.384 Finished: SUCCESS